LCORE-1166: store tool calls and tool results in cache entry #984
Conversation
Signed-off-by: Stephanie <yangcao@redhat.com>
Walkthrough
Tool call/result metadata fields (tool_calls and tool_results) are added to the cache entry model and propagated through the SQLite and PostgreSQL cache layers.
Changes
Sequence Diagram(s): omitted — changes are schema/propagation-focused and do not introduce a new multi-component control flow requiring a sequence diagram.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Signed-off-by: Stephanie <yangcao@redhat.com>
Actionable comments posted: 0
🧹 Nitpick comments (10)
src/cache/sqlite_cache.py (6)
13-14: Fix ungrouped imports to resolve linter warning.
The import from `utils.types` should be grouped with the other `utils` imports to satisfy the ungrouped-imports linter rule.
♻️ Suggested fix
Move line 13 to be adjacent to line 15 (after `from log import get_logger`):
```diff
 from models.config import SQLiteDatabaseConfiguration
 from models.responses import ConversationData, ReferencedDocument
-from utils.types import ToolCallSummary, ToolResultSummary
 from log import get_logger
 from utils.connection_decorator import connection
+from utils.types import ToolCallSummary, ToolResultSummary
```
248-253: Fix implicit string concatenation in logger warning.
The linter flagged implicit string concatenation. Use a single string instead.
♻️ Suggested fix
```diff
 except (json.JSONDecodeError, ValueError) as e:
     logger.warning(
-        "Failed to deserialize tool_calls for " "conversation %s: %s",
+        "Failed to deserialize tool_calls for conversation %s: %s",
         conversation_id,
         e,
     )
```
264-269: Fix implicit string concatenation in logger warning.
Same issue as above: use a single string.
♻️ Suggested fix
```diff
 except (json.JSONDecodeError, ValueError) as e:
     logger.warning(
-        "Failed to deserialize tool_results for " "conversation %s: %s",
+        "Failed to deserialize tool_results for conversation %s: %s",
         conversation_id,
         e,
     )
```
333-338: Fix implicit string concatenation in logger warning.
♻️ Suggested fix
```diff
 except (TypeError, ValueError) as e:
     logger.warning(
-        "Failed to serialize tool_calls for " "conversation %s: %s",
+        "Failed to serialize tool_calls for conversation %s: %s",
         conversation_id,
         e,
     )
```
347-352: Fix implicit string concatenation in logger warning.
♻️ Suggested fix
```diff
 except (TypeError, ValueError) as e:
     logger.warning(
-        "Failed to serialize tool_results for " "conversation %s: %s",
+        "Failed to serialize tool_results for conversation %s: %s",
         conversation_id,
         e,
     )
```
197-284: Consider refactoring the `get` method to reduce local variables.
The pylint error indicates too many local variables (19/15). Consider extracting the deserialization logic into helper methods to improve readability and satisfy the linter.
♻️ Suggested approach
Extract the JSON deserialization into a private helper method:
```python
def _deserialize_json_field(
    self,
    json_str: str | None,
    model_class: type,
    field_name: str,
    conversation_id: str,
) -> list | None:
    """Deserialize a JSON string field into a list of Pydantic models."""
    if not json_str:
        return None
    try:
        data = json.loads(json_str)
        return [model_class.model_validate(item) for item in data]
    except (json.JSONDecodeError, ValueError) as e:
        logger.warning(
            "Failed to deserialize %s for conversation %s: %s",
            field_name,
            conversation_id,
            e,
        )
        return None
```
Then use it in `get`:
```python
docs_obj = self._deserialize_json_field(
    conversation_entry[6], ReferencedDocument, "referenced_documents", conversation_id
)
tool_calls_obj = self._deserialize_json_field(
    conversation_entry[7], ToolCallSummary, "tool_calls", conversation_id
)
tool_results_obj = self._deserialize_json_field(
    conversation_entry[8], ToolResultSummary, "tool_results", conversation_id
)
```
src/cache/postgres_cache.py (4)
12-14: Fix ungrouped imports to resolve linter warning.
Same as the SQLite cache: group the `utils` imports together.
♻️ Suggested fix
```diff
 from models.responses import ConversationData, ReferencedDocument
-from utils.types import ToolCallSummary, ToolResultSummary
 from log import get_logger
 from utils.connection_decorator import connection
+from utils.types import ToolCallSummary, ToolResultSummary
```
364-369: Fix implicit string concatenation in logger warning.
♻️ Suggested fix
```diff
 except (TypeError, ValueError) as e:
     logger.warning(
-        "Failed to serialize tool_calls for " "conversation %s: %s",
+        "Failed to serialize tool_calls for conversation %s: %s",
         conversation_id,
         e,
     )
```
378-383: Fix implicit string concatenation in logger warning.
♻️ Suggested fix
```diff
 except (TypeError, ValueError) as e:
     logger.warning(
-        "Failed to serialize tool_results for " "conversation %s: %s",
+        "Failed to serialize tool_results for conversation %s: %s",
         conversation_id,
         e,
     )
```
230-317: Consider refactoring the `get` method to reduce local variables.
Similar to the SQLite cache, the `get` method has too many local variables (16/15). Consider extracting the deserialization logic into a helper method to satisfy the linter and reduce code duplication between the two cache implementations, as in the sketch below.
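A minimal sketch of what such a helper could look like on the PostgreSQL side, written here as a standalone function for self-containment; the name `validate_jsonb_list` is an assumption, and in the real code it would more likely become a private method on `PostgresCache`:
```python
import logging
from typing import TypeVar

from pydantic import BaseModel

logger = logging.getLogger(__name__)
T = TypeVar("T", bound=BaseModel)


def validate_jsonb_list(
    raw: list | None,
    model_class: type[T],
    field_name: str,
    conversation_id: str,
) -> list[T] | None:
    """Validate an already-decoded jsonb array into a list of Pydantic models.

    psycopg2 returns jsonb columns as Python lists/dicts, so unlike the SQLite
    path there is no json.loads step here.
    """
    if not raw:
        return None
    try:
        return [model_class.model_validate(item) for item in raw]
    except (TypeError, ValueError) as e:
        logger.warning(
            "Failed to deserialize %s for conversation %s: %s",
            field_name,
            conversation_id,
            e,
        )
        return None
```
Calling such a helper once per JSON-backed column (referenced_documents, tool_calls, tool_results) would remove the repeated try/except blocks and bring `get` back under the pylint local-variable limit.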
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- src/app/endpoints/query.py
- src/cache/postgres_cache.py
- src/cache/sqlite_cache.py
- src/models/cache_entry.py
- src/utils/endpoints.py
- tests/unit/cache/test_postgres_cache.py
- tests/unit/cache/test_sqlite_cache.py
🧰 Additional context used
📓 Path-based instructions (5)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Use absolute imports for internal modules: `from authentication import get_auth_dependency`
Use FastAPI dependencies: `from fastapi import APIRouter, HTTPException, Request, status, Depends`
Use Llama Stack imports: `from llama_stack_client import AsyncLlamaStackClient`
Check `constants.py` for shared constants before defining new ones
All modules start with descriptive docstrings explaining purpose
Use the `logger = logging.getLogger(__name__)` pattern for module logging
Type aliases defined at module level for clarity
All functions require docstrings with brief descriptions
Complete type annotations for function parameters and return types, using `typing_extensions.Self` for model validators
Use union types with modern syntax: `str | int` instead of `Union[str, int]`
Use `Optional[Type]` for optional parameters
Use snake_case with descriptive, action-oriented function names (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns; return new data structures instead
Use `async def` for I/O operations and external API calls
Handle `APIConnectionError` from Llama Stack in error handling
Use `logger.debug()` for detailed diagnostic information
Use `logger.info()` for general information about program execution
Use `logger.warning()` for unexpected events or potential problems
Use `logger.error()` for serious problems that prevented function execution
All classes require descriptive docstrings explaining purpose
Use PascalCase for class names with descriptive names and standard suffixes: `Configuration`, `Error`/`Exception`, `Resolver`, `Interface`
Use ABC for abstract base classes with `@abstractmethod` decorators
Complete type annotations for all class attributes
Follow Google Python docstring conventions (https://google.github.io/styleguide/pyguide.html) with sections: Args, Returns, Raises, Attributes
Run `uv run make format` to auto-format code with black and ruff before completion
Run `uv run make verify` to run all linters (black, pyl...
Files:
- tests/unit/cache/test_sqlite_cache.py
- src/cache/sqlite_cache.py
- src/cache/postgres_cache.py
- src/utils/endpoints.py
- src/models/cache_entry.py
- src/app/endpoints/query.py
- tests/unit/cache/test_postgres_cache.py
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
tests/**/*.py: Use pytest for all unit and integration tests, not unittest
Use `pytest-mock` for AsyncMock objects in tests
Use the `MOCK_AUTH = ("mock_user_id", "mock_username", False, "mock_token")` pattern for authentication mocks in tests
Files:
- tests/unit/cache/test_sqlite_cache.py
- tests/unit/cache/test_postgres_cache.py
tests/unit/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
tests/unit/**/*.py: Unit tests require 60% code coverage
Write unit tests covering new functionality before completion
Files:
- tests/unit/cache/test_sqlite_cache.py
- tests/unit/cache/test_postgres_cache.py
src/models/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
src/models/**/*.py: Use `@field_validator` and `@model_validator` for custom validation in Pydantic models
Extend `BaseModel` for data Pydantic models
Use `@model_validator` and `@field_validator` for Pydantic model validation
Files:
src/models/cache_entry.py
src/app/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use FastAPI `HTTPException` with appropriate status codes for API endpoint error handling
Files:
src/app/endpoints/query.py
🧠 Learnings (1)
📚 Learning: 2026-01-12T10:58:40.230Z
Learnt from: blublinsky
Repo: lightspeed-core/lightspeed-stack PR: 972
File: src/models/config.py:459-513
Timestamp: 2026-01-12T10:58:40.230Z
Learning: In lightspeed-core/lightspeed-stack, for Python files under src/models, when a user claims a fix is done but the issue persists, verify the current code state before accepting the fix. Steps: review the diff, fetch the latest changes, run relevant tests, reproduce the issue, search the codebase for lingering references to the original problem, confirm the fix is applied and not undone by subsequent commits, and validate with local checks to ensure the issue is resolved.
Applied to files:
src/models/cache_entry.py
🧬 Code graph analysis (5)
tests/unit/cache/test_sqlite_cache.py (4)
- src/utils/types.py (2): ToolCallSummary (111-119), ToolResultSummary (122-133)
- src/models/cache_entry.py (1): CacheEntry (9-30)
- src/models/responses.py (2): model (1736-1749), ReferencedDocument (328-342)
- src/cache/sqlite_cache.py (2): insert_or_append (287-379), get (197-284)
src/cache/sqlite_cache.py (1)
- src/utils/types.py (2): ToolCallSummary (111-119), ToolResultSummary (122-133)
src/cache/postgres_cache.py (1)
- src/utils/types.py (2): ToolCallSummary (111-119), ToolResultSummary (122-133)
src/models/cache_entry.py (1)
- src/utils/types.py (2): ToolCallSummary (111-119), ToolResultSummary (122-133)
tests/unit/cache/test_postgres_cache.py (2)
- src/utils/types.py (2): ToolCallSummary (111-119), ToolResultSummary (122-133)
- src/models/cache_entry.py (1): CacheEntry (9-30)
🪛 GitHub Actions: Python linter
src/cache/sqlite_cache.py
[error] 197-197: Too many local variables (19/15) (too-many-locals) detected by pylint.
[warning] 250-250: Implicit string concatenation found in call (implicit-str-concat).
[warning] 266-266: Implicit string concatenation found in call (implicit-str-concat).
[warning] 335-335: Implicit string concatenation found in call (implicit-str-concat).
[warning] 349-349: Implicit string concatenation found in call (implicit-str-concat).
[warning] 15-15: Imports from package utils are not grouped (ungrouped-imports).
src/cache/postgres_cache.py
[error] 230-230: Too many local variables (16/15) (too-many-locals) detected by pylint.
[warning] 366-366: Implicit string concatenation found in call (implicit-str-concat).
[warning] 380-380: Implicit string concatenation found in call (implicit-str-concat).
[warning] 14-14: Imports from package utils are not grouped (ungrouped-imports).
🪛 GitHub Actions: Unit tests
src/utils/endpoints.py
[error] 801-801: pydantic ValidationError: tool_results must be a list. Received a Mock object in CacheEntry; ensure tool_results is provided as a list (e.g., [] or a list of results).
[error] 801-801: CacheEntry creation failed during tests: tool_results expects a list but a non-list value (Mock) was provided.
[error] 1-1: Command failed with exit code 1. uv run pytest tests/unit --cov=src --cov=runner --cov-report term-missing
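The first two errors point at the mocked summary in the unit tests: pydantic rejects a bare `Mock` attribute for the new list-typed fields. A minimal sketch of the kind of fix later applied in `tests/unit/utils/test_endpoints.py` (the exact fixture setup is assumed; attribute names follow the error above):
```python
# Hypothetical excerpt: give the mocked summary real lists so CacheEntry
# validation accepts tool_calls / tool_results instead of auto-created Mocks.
from unittest.mock import Mock

summary = Mock()
summary.referenced_documents = []  # existing field, assumed to already be a list
summary.tool_calls = []            # must be a list (or None), not a Mock
summary.tool_results = []          # must be a list (or None), not a Mock
```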
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: build-pr
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
- GitHub Check: E2E: library mode / ci
- GitHub Check: E2E: server mode / ci
🔇 Additional comments (8)
src/models/cache_entry.py (1)
6-30: LGTM! The new `tool_calls` and `tool_results` fields are correctly defined as optional lists with proper type annotations. The implementation follows the existing pattern established by `referenced_documents`.
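For orientation, a hedged sketch of the resulting model shape; the non-new fields shown here (`query`, `response`) are placeholders rather than the actual attribute names in `src/models/cache_entry.py`:
```python
from pydantic import BaseModel

from models.responses import ReferencedDocument
from utils.types import ToolCallSummary, ToolResultSummary


class CacheEntry(BaseModel):
    """Cache entry model (abridged sketch; pre-existing fields are assumed names)."""

    query: str       # placeholder for the pre-existing fields
    response: str    # placeholder for the pre-existing fields
    referenced_documents: list[ReferencedDocument] | None = None
    tool_calls: list[ToolCallSummary] | None = None
    tool_results: list[ToolResultSummary] | None = None
```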
tests/unit/cache/test_sqlite_cache.py (2)
478-525: LGTM! Comprehensive test coverage for the new `tool_calls` and `tool_results` fields. The test properly validates:
- Storage of ToolCallSummary and ToolResultSummary objects
- Correct round-trip serialization/deserialization through SQLite
- Field values are preserved after retrieval
528-573: LGTM! Good coverage of the combined case where all optional fields (`referenced_documents`, `tool_calls`, `tool_results`) are present. This ensures the serialization logic handles multiple JSON fields correctly.
src/app/endpoints/query.py (2)
385-387: LGTM! Consistent handling of `tool_calls` and `tool_results` in the CacheEntry construction, following the same pattern used elsewhere in the codebase.
419-420: LGTM! The QueryResponse now correctly exposes `tool_calls` and `tool_results` to API consumers, enabling clients to access tool usage metadata from query responses.
tests/unit/cache/test_postgres_cache.py (2)
726-839: LGTM! Comprehensive test coverage for PostgreSQL cache handling of `tool_calls` and `tool_results`. The test validates:
- Correct JSON serialization on insert
- Proper parameter positioning in SQL statements
- Correct deserialization and model validation on retrieval
- Round-trip consistency of all field values
601-606: LGTM! Index updates correctly reflect the new column order with `tool_calls` and `tool_results` at the end. The comment clarifies the parameter positioning.
src/utils/endpoints.py (1)
809-811: No issues found. The code logic is correct, and tests properly instantiate `TurnSummary` with real list objects rather than Mock objects. The truthy check correctly handles both empty and non-empty lists, converting empty lists to `None` as expected by the `CacheEntry` optional fields.
Likely an incorrect or invalid review comment.
Signed-off-by: Stephanie <yangcao@redhat.com>
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/cache/postgres_cache.py (1)
1-14: Run Black formatter before merging. The pipeline indicates Black would reformat this file. Per coding guidelines, run `uv run make format` to auto-format the code with Black and Ruff before completion.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/cache/postgres_cache.py
- src/cache/sqlite_cache.py
🚧 Files skipped from review as they are similar to previous changes (1)
- src/cache/sqlite_cache.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: (same CLAUDE.md guideline list as in the first review above)
Files:
src/cache/postgres_cache.py
🧬 Code graph analysis (1)
src/cache/postgres_cache.py (3)
- src/utils/types.py (2): ToolCallSummary (111-119), ToolResultSummary (122-133)
- src/cache/sqlite_cache.py (1): get (197-286)
- src/cache/cache.py (1): get (47-59)
🪛 GitHub Actions: Black
src/cache/postgres_cache.py
[error] 1-1: Black would reformat this file. Run 'uv tool run black --check .' to see details.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: build-pr
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
- GitHub Check: E2E: server mode / ci
- GitHub Check: E2E: library mode / ci
🔇 Additional comments (6)
src/cache/postgres_cache.py (6)
13-14: LGTM! Imports are correctly added for the new `ToolCallSummary` and `ToolResultSummary` types needed for the tool metadata serialization/deserialization.
36-38: LGTM! Schema documentation and table definition are consistent. Using `jsonb` for tool metadata is appropriate for PostgreSQL as it provides efficient JSON storage with indexing capabilities.
Also applies to: 61-62
82-95: LGTM! SQL statements are correctly updated. The SELECT column order (indices 7 and 8 for tool_calls and tool_results) aligns with the deserialization logic in `get()`, and the INSERT placeholder count matches the parameter list.
270-302: LGTM! Deserialization logic correctly handles PostgreSQL's automatic jsonb parsing. Unlike the SQLite implementation that requires `json.loads()`, psycopg2 deserializes jsonb to Python objects directly. Catching `ValueError` and `TypeError` for `model_validate` failures is appropriate, and the warning-based graceful degradation maintains consistency with the `referenced_documents` pattern.
304-315: LGTM! CacheEntry construction properly includes the new `tool_calls` and `tool_results` fields, maintaining consistency with the SQLite implementation and the expanded model definition.
357-402: LGTM! Serialization logic correctly converts Pydantic models to JSON strings for PostgreSQL jsonb storage. The pattern matches the existing `referenced_documents` handling, with appropriate error handling that logs warnings while allowing the insert to proceed with null values for the affected fields.
Signed-off-by: Stephanie <yangcao@redhat.com>
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/cache/postgres_cache.py (1)
49-65: Database migration required for existing PostgreSQL databases with new tool_calls and tool_results columns.
The `CREATE TABLE IF NOT EXISTS` statement will not add `tool_calls` and `tool_results` columns to existing cache tables. Production databases must execute an explicit migration (e.g., `ALTER TABLE cache ADD COLUMN tool_calls jsonb;` and `ALTER TABLE cache ADD COLUMN tool_results jsonb;`) to support the new columns referenced in the SELECT (line 84) and INSERT (lines 92-94) statements.
src/cache/sqlite_cache.py (1)
48-64: Add database migration for existing SQLite deployments to support the new `tool_calls` and `tool_results` columns.
The `CREATE_CACHE_TABLE` statement uses `CREATE TABLE IF NOT EXISTS`, which skips table creation if the table already exists. Existing SQLite databases will retain their old schema without the newly added `tool_calls` and `tool_results` columns, so subsequent `SELECT` and `INSERT` operations against these columns will fail at runtime.
Implement a migration strategy such as:
```sql
ALTER TABLE cache ADD COLUMN tool_calls text;
ALTER TABLE cache ADD COLUMN tool_results text;
```
Alternatively, use a database schema version check (via `PRAGMA user_version`) in `initialize_cache()` to detect old schemas and apply necessary alterations, ensuring backward compatibility with existing deployments.
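A minimal sketch of such a guard for the SQLite side; it inspects the live schema with `PRAGMA table_info` rather than a full `user_version` counter, which is one simple way to implement the suggestion (the function name and its call site in `initialize_cache()` are assumptions, and the PostgreSQL cache would need the analogous `ALTER TABLE ... ADD COLUMN ... jsonb` treatment):
```python
import sqlite3


def migrate_cache_schema(connection: sqlite3.Connection) -> None:
    """Add tool metadata columns to cache tables created before they existed."""
    # PRAGMA table_info returns one row per column; index 1 is the column name.
    existing = {row[1] for row in connection.execute("PRAGMA table_info(cache)")}
    for column in ("tool_calls", "tool_results"):
        if column not in existing:
            connection.execute(f"ALTER TABLE cache ADD COLUMN {column} text")
    connection.commit()
```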
🧹 Nitpick comments (1)
tests/unit/utils/test_endpoints.py (1)
1126-1128: LGTM! Consistent update with the first test case.
Consider adding test cases that verify non-empty `tool_calls` and `tool_results` are correctly propagated to the `CacheEntry` in `cleanup_after_streaming`. This would provide coverage for the new functionality beyond just ensuring the mocks don't break existing tests.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- src/app/endpoints/query.py
- src/cache/postgres_cache.py
- src/cache/sqlite_cache.py
- src/models/cache_entry.py
- src/utils/endpoints.py
- tests/unit/cache/test_postgres_cache.py
- tests/unit/cache/test_sqlite_cache.py
- tests/unit/utils/test_endpoints.py
🧰 Additional context used
📓 Path-based instructions (5)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: (same CLAUDE.md guideline list as in the first review above)
Files:
- tests/unit/utils/test_endpoints.py
- src/cache/postgres_cache.py
- src/models/cache_entry.py
- src/cache/sqlite_cache.py
- tests/unit/cache/test_postgres_cache.py
- src/utils/endpoints.py
- src/app/endpoints/query.py
- tests/unit/cache/test_sqlite_cache.py
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
tests/**/*.py: Use pytest for all unit and integration tests, not unittest
Use `pytest-mock` for AsyncMock objects in tests
Use the `MOCK_AUTH = ("mock_user_id", "mock_username", False, "mock_token")` pattern for authentication mocks in tests
Files:
- tests/unit/utils/test_endpoints.py
- tests/unit/cache/test_postgres_cache.py
- tests/unit/cache/test_sqlite_cache.py
tests/unit/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
tests/unit/**/*.py: Unit tests require 60% code coverage
Write unit tests covering new functionality before completion
Files:
- tests/unit/utils/test_endpoints.py
- tests/unit/cache/test_postgres_cache.py
- tests/unit/cache/test_sqlite_cache.py
src/models/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
src/models/**/*.py: Use `@field_validator` and `@model_validator` for custom validation in Pydantic models
Extend `BaseModel` for data Pydantic models
Use `@model_validator` and `@field_validator` for Pydantic model validation
Files:
src/models/cache_entry.py
src/app/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use FastAPI `HTTPException` with appropriate status codes for API endpoint error handling
Files:
src/app/endpoints/query.py
🧠 Learnings (1)
📚 Learning: 2026-01-12T10:58:40.230Z
Learnt from: blublinsky
Repo: lightspeed-core/lightspeed-stack PR: 972
File: src/models/config.py:459-513
Timestamp: 2026-01-12T10:58:40.230Z
Learning: In lightspeed-core/lightspeed-stack, for Python files under src/models, when a user claims a fix is done but the issue persists, verify the current code state before accepting the fix. Steps: review the diff, fetch the latest changes, run relevant tests, reproduce the issue, search the codebase for lingering references to the original problem, confirm the fix is applied and not undone by subsequent commits, and validate with local checks to ensure the issue is resolved.
Applied to files:
src/models/cache_entry.py
🧬 Code graph analysis (3)
src/models/cache_entry.py (1)
- src/utils/types.py (2): ToolCallSummary (111-119), ToolResultSummary (122-133)
tests/unit/cache/test_postgres_cache.py (3)
- src/utils/types.py (2): ToolCallSummary (111-119), ToolResultSummary (122-133)
- src/cache/postgres_cache.py (1): PostgresCache (19-522)
- src/models/cache_entry.py (1): CacheEntry (9-30)
tests/unit/cache/test_sqlite_cache.py (4)
- src/utils/types.py (2): ToolCallSummary (111-119), ToolResultSummary (122-133)
- src/models/cache_entry.py (1): CacheEntry (9-30)
- src/models/responses.py (2): model (1736-1749), ReferencedDocument (328-342)
- src/cache/sqlite_cache.py (2): insert_or_append (287-379), get (197-284)
🔇 Additional comments (17)
src/models/cache_entry.py (1)
1-30: LGTM! The new `tool_calls` and `tool_results` fields are properly added with correct types, appropriate `Optional` typing with `None` defaults for backward compatibility, and the docstring is updated to document the new attributes.
src/utils/endpoints.py (1)
809-810: LGTM! The new fields are correctly populated from the summary object, following the same pattern used for `referenced_documents` on line 808. This ensures consistency in how optional list fields are handled (converting empty lists to `None`).
src/cache/sqlite_cache.py (2)
196-284: LGTM on deserialization logic. The deserialization of `tool_calls` (index 7) and `tool_results` (index 8) follows the established pattern for `referenced_documents`. Error handling with warning logs is appropriate for graceful degradation when encountering malformed data.
326-368: LGTM on serialization logic. The serialization follows the same pattern as `referenced_documents`, using `model_dump(mode="json")` for Pydantic models and proper error handling with warning logs.
src/cache/postgres_cache.py (2)
270-314: LGTM on deserialization logic. The deserialization correctly relies on psycopg2's native jsonb handling (no `json.loads` needed) and follows the established pattern for `referenced_documents`. The error handling with warning logs ensures graceful degradation.
357-401: LGTM on serialization logic. The serialization correctly uses `json.dumps` for inserting jsonb data and follows the same error handling pattern as `referenced_documents`.
tests/unit/utils/test_endpoints.py (1)
1072-1074: LGTM! The mock summary is correctly updated to include both `tool_calls` and `tool_results` attributes, matching the expected `TurnSummary` structure.
tests/unit/cache/test_sqlite_cache.py (4)
16-16: LGTM! The import correctly uses absolute imports as per coding guidelines, importing the new `ToolCallSummary` and `ToolResultSummary` types needed for the new test cases.
474-476: LGTM! Good addition to verify that the new optional fields default to `None` when not explicitly provided, ensuring backward compatibility with existing cache entries.
478-525: LGTM! Comprehensive test that validates the full serialization/deserialization round-trip for `tool_calls` and `tool_results`. The test properly constructs model instances, persists them, retrieves them, and verifies both equality and individual field values.
528-573: LGTM! Excellent integration test that verifies all optional fields (`referenced_documents`, `tool_calls`, `tool_results`) can coexist in a single `CacheEntry` and survive the serialization/deserialization round-trip. This provides good confidence in the schema integrity.
src/app/endpoints/query.py (2)
385-387: LGTM! The `CacheEntry` construction correctly populates `tool_calls` and `tool_results` from the summary, using conditional expressions to normalize empty lists to `None` for consistency with the `Optional` field types.
416-426: LGTM! The `QueryResponse` construction correctly includes `tool_calls` and `tool_results`, maintaining consistency with the `CacheEntry` construction pattern. This ensures tool usage metadata is available to API consumers.
tests/unit/cache/test_postgres_cache.py (4)
17-17: LGTM! Correct absolute import of the `ToolCallSummary` and `ToolResultSummary` types needed for the new test cases.
601-628: LGTM! Good updates to account for the new schema columns. The comment clarifying the index position (`-3` for `referenced_documents`) improves maintainability, and the mock return value correctly includes `None` for the new `tool_calls` and `tool_results` columns.
655-682: LGTM! Thorough updates that validate all optional fields (`referenced_documents`, `tool_calls`, `tool_results`) are properly handled as `None` when not provided. The inline comments improve code clarity.
726-839: LGTM! Comprehensive test that validates the full round-trip for `tool_calls` and `tool_results` in the PostgreSQL cache. The test properly:
- Constructs model instances with all required fields
- Verifies JSON serialization structure matches expected format
- Mocks the database return value correctly
- Validates deserialization produces equivalent objects
- Follows established mocking patterns in the test suite
tisnik left a comment
LGTM, thank you!
Description
Include tool calls and tool results in the cache entry, so that they are stored and retrieved for the `/v2/conversations` endpoint.
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
New Features
Improvements
Tests
✏️ Tip: You can customize this high-level summary in your review settings.