
⚡ Bolt: Add LRU caching to AI recommendation engine#23

Open
LVT-ENG wants to merge 1 commit into main from
bolt/add-ai-cache-6026025191079107177

Conversation


@LVT-ENG LVT-ENG commented Mar 22, 2026

Implemented a performance optimization in the AI recommendation engine by adding LRU caching for expensive LLM API calls. This change significantly reduces response times for common garment and event type combinations. The solution includes updated metadata for garments, a refactored engine using hashable cache keys, and comprehensive tests to ensure reliability and correct authentication. Verified with a custom benchmark script demonstrating massive speed improvements.


PR created automatically by Jules for task 6026025191079107177 started by @LVT-ENG

Implement `functools.lru_cache` in the Jules AI engine to significantly reduce latency and eliminate redundant LLM API calls for identical garment/event combinations.

What:
- Added `drape` and `elasticity` metadata to `SHOPIFY_INVENTORY` in `backend/models.py`.
- Refactored `get_jules_advice` in `backend/jules_engine.py` to use a cached internal function with primitive, hashable keys.
- Updated `backend/tests/test_main.py` with correct authentication mocking and payload validation.
- Added `backend/benchmark_cache.py` to verify performance gains.
- Updated `.gitignore` to exclude Python cache files.

Why:
- LLM API calls are the primary bottleneck, taking ~1s per request.
- Caching reduces subsequent identical requests to <0.1ms (verified by benchmark).
- Standardizes the `UserScan` payload across the backend and tests.

Impact:
- Reduces latency for repeat requests by ~99.9%.
- Lowers API costs by skipping redundant calls.
- Ensures test stability and correct API signature enforcement.

Measurement:
- Execute `export PYTHONPATH=$PYTHONPATH:$(pwd)/backend && python3 backend/benchmark_cache.py` to see the speedup.
- Run `export PYTHONPATH=$PYTHONPATH:$(pwd)/backend && python3 -m pytest backend/tests/` to verify correctness.
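A minimal illustration of what such a benchmark measures (this is a stand-in sketch, not the repository's `benchmark_cache.py`; the 50 ms sleep simulates the LLM round trip):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def slow_advice(key: str) -> str:
    time.sleep(0.05)  # stand-in for the slow LLM API call
    return f"advice:{key}"

def timed(fn, *args):
    # Return (result, elapsed seconds) for a single call.
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

_, cold = timed(slow_advice, "dress|wedding")  # pays the full call cost
_, warm = timed(slow_advice, "dress|wedding")  # served from the cache
print(f"cold: {cold * 1000:.1f} ms, warm: {warm * 1000:.3f} ms")
```

The second, cached call should be orders of magnitude faster than the first, which is the effect the `~99.9%` latency figure above refers to.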

Co-authored-by: LVT-ENG <214667862+LVT-ENG@users.noreply.github.com>
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.


vercel bot commented Mar 22, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| _deploy_build | Ready | Preview, Comment | Mar 22, 2026 5:54am |
| tryonyou-org | Ready | Preview, Comment | Mar 22, 2026 5:54am |

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a crucial performance optimization to the AI recommendation engine by integrating LRU caching for expensive LLM API calls. The change aims to drastically reduce latency for frequently requested fashion advice by caching results based on primitive garment and event type attributes. This required a refactoring of the advice generation logic, an expansion of the garment data model to include fabric properties essential for caching, and the addition of a benchmark script to verify the performance gains.

Highlights

  • Performance Optimization: Implemented LRU caching for the AI recommendation engine's Large Language Model (LLM) calls to significantly reduce response times for common garment and event type combinations.
  • Code Refactoring: Refactored the get_jules_advice function in backend/jules_engine.py to separate caching logic into an internal function (_get_cached_advice) and ensure primitive, hashable types are used as cache keys.
  • Data Model Enhancement: Extended the Garment data model in backend/models.py with new drape and elasticity attributes, and updated existing garment data in SHOPIFY_INVENTORY with these properties.
  • Benchmarking and Testing: Introduced a dedicated benchmark script (backend/benchmark_cache.py) to validate the performance improvements of the caching mechanism and updated an existing test case (test_recommend_garment_engine_failure) to reflect new fallback behavior and authentication requirements.
  • Documentation and Configuration: Added Python-specific ignore patterns to .gitignore and documented the learning and action related to this caching optimization in a new .jules/bolt.md file.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant performance improvement by implementing LRU caching for the AI recommendation engine. The refactoring in jules_engine.py to use a wrapper function with primitive types for the cache key is well-executed and follows best practices. The addition of a benchmark script is also a valuable contribution for verifying the performance gains. My review has identified one critical security vulnerability concerning a hardcoded secret key, which is now referenced by the updated tests. It is crucial to address this to maintain the security of the application.

```diff
 import time
 from fastapi.testclient import TestClient
-from backend.main import app
+from backend.main import app, SECRET_KEY
```


Severity: critical (security)

This change introduces a dependency on SECRET_KEY. Upon inspection of the full file (backend/main.py, line 22), the secret key is hardcoded: SECRET_KEY = "LVT_SECRET_PROD_091228222". Hardcoding secrets is a critical security vulnerability as it exposes sensitive credentials directly in the source code. This key should be managed securely by loading it from an environment variable or a secret management service, similar to how GEMINI_API_KEY is handled in the project.
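One sketch of the suggested fix, loading the key from the environment and failing fast when it is absent. The variable name `LVT_SECRET_KEY` and the `load_secret_key` helper are hypothetical, not part of the repository:

```python
import os

def load_secret_key(var_name: str = "LVT_SECRET_KEY") -> str:
    """Read the signing key from the environment; fail fast if it is missing.

    Raising at startup is preferable to falling back to a hardcoded default,
    which would silently reintroduce the vulnerability.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} environment variable is not set")
    return key

# In backend/main.py this would replace the hardcoded assignment:
# SECRET_KEY = load_secret_key()
```

Tests can then inject a known key via the environment (or monkeypatching) instead of importing a production secret from `backend.main`.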
