
⚡ Bolt: Backend and Frontend Lifecycle Optimizations#35

Open
LVT-ENG wants to merge 1 commit into main from bolt-performance-optimizations-17780262876767202699

Conversation

@LVT-ENG (Member) commented Mar 26, 2026

💡 What

Implemented backend concurrency optimizations (sync 'def' for blocking calls) and frontend lifecycle optimizations (DOM caching, unobserve).

🎯 Why

The blocking LLM call in the AI engine was stalling the FastAPI event loop, and repeated DOM lookups/unnecessary observers were impacting frontend performance.

📊 Impact

  • Prevents event loop stalling, allowing the backend to handle concurrent requests more efficiently.
  • Saves CPU cycles by pre-encoding auth keys.
  • Reduces DOM lookup overhead and observer callbacks on the frontend.

🔬 Measurement

Verified via pytest (backend logic) and vitest (frontend logic). Visual verification confirmed UI stability.


PR created automatically by Jules for task 17780262876767202699 started by @LVT-ENG

This PR implements key performance improvements across the stack:
- Backend: Converted the recommendation endpoint to synchronous 'def' to utilize FastAPI's thread pool, preventing blocking LLM calls from stalling the event loop.
- Backend: Pre-encoded HMAC secret keys to bytes during initialization to save CPU cycles on every authentication check.
- Frontend: Implemented DOM element caching in the TryOnYouBunker constructor for frequently accessed nodes.
- Frontend: Optimized the IntersectionObserver to unobserve targets immediately after their initial transition animation.

Impact: Reduces event loop contention on the backend and improves UI responsiveness/memory usage on the frontend.
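As a minimal sketch (not the repository's code) of why the sync-`def` conversion helps: FastAPI runs plain `def` endpoints in a thread pool, which is equivalent to offloading the blocking call via `run_in_executor`. The `blocking_llm_call` name below is a hypothetical stand-in for the real LLM request.

```python
# Sketch of offloading a blocking call so the event loop stays free.
# FastAPI does the equivalent automatically for plain `def` endpoints.
import asyncio
import time

def blocking_llm_call() -> str:
    # Stand-in for a slow, blocking LLM request (hypothetical).
    time.sleep(0.2)
    return "recommendation"

async def handle_request(loop: asyncio.AbstractEventLoop) -> str:
    # Run the blocking work in the default thread pool; other
    # coroutines keep running on the event loop meanwhile.
    return await loop.run_in_executor(None, blocking_llm_call)

async def main() -> list:
    loop = asyncio.get_running_loop()
    start = time.perf_counter()
    # Three "requests" overlap instead of running back-to-back.
    results = await asyncio.gather(*(handle_request(loop) for _ in range(3)))
    elapsed = time.perf_counter() - start
    # Serial execution would take ~0.6s; concurrent takes ~0.2s.
    assert elapsed < 0.55, "calls should run concurrently, not serially"
    return results

results = asyncio.run(main())
print(results)
```

Had the endpoint stayed `async def` while making a synchronous LLM call, that call would run directly on the event loop and block every other request for its full duration.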

Co-authored-by: LVT-ENG <214667862+LVT-ENG@users.noreply.github.com>
@google-labs-jules (Contributor) commented

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@vercel bot commented Mar 26, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| _deploy_build | Ready | Preview, Comment | Mar 26, 2026 6:01am |
| tryonyou-org | Ready | Preview, Comment | Mar 26, 2026 6:01am |

@gemini-code-assist (Contributor) commented
Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances both backend and frontend performance by implementing targeted optimizations. On the backend, it addresses event loop stalling caused by blocking LLM calls and reduces CPU usage during authentication. On the frontend, it minimizes DOM lookup overhead and streamlines observer callbacks, resulting in a more responsive and efficient user interface.

Highlights

  • Backend Concurrency Optimization: The recommend_garment endpoint in the FastAPI backend was converted from an asynchronous (async def) to a synchronous (def) function. This change allows FastAPI to run the blocking LLM call in a separate thread pool, preventing it from stalling the main event loop and improving the backend's ability to handle concurrent requests.
  • Authentication Key Pre-encoding: The secret keys used for authentication verification in both backend/DivineoBunker.py and backend/main.py are now pre-encoded to bytes during initialization. This optimization saves CPU cycles by avoiding repeated encoding operations during high-frequency authentication checks.
  • Frontend DOM Caching: Frequently accessed DOM elements in js/main.js are now cached within a this.ui object in the TryOnYouBunker class. This reduces the overhead of repeated document.getElementById and document.querySelectorAll calls, leading to improved frontend performance.
  • Optimized IntersectionObserver Usage: The animateOnScroll function in js/main.js now includes an observer.unobserve(entry.target) call. This stops observing elements once their scroll-triggered animation has been applied, reducing unnecessary observer callbacks and improving frontend efficiency.
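The key pre-encoding change above can be sketched as follows. This is an illustrative example with assumed names (`SECRET_KEY`, `sign`, `verify` are hypothetical, not the repository's identifiers): the secret is converted to bytes once at startup, rather than calling `.encode()` inside every verification.

```python
# Sketch (assumed names, not the repository's code) of pre-encoding an
# HMAC secret once instead of re-encoding it on every auth check.
import hashlib
import hmac

SECRET_KEY = "example-secret"           # hypothetical config value
SECRET_KEY_BYTES = SECRET_KEY.encode()  # encoded once, at initialization

def sign(message: str) -> str:
    # Uses the pre-encoded key; no per-call str -> bytes conversion.
    return hmac.new(SECRET_KEY_BYTES, message.encode(), hashlib.sha256).hexdigest()

def verify(message: str, signature: str) -> bool:
    # compare_digest performs a constant-time comparison.
    return hmac.compare_digest(sign(message), signature)

tag = sign("payload")
print(verify("payload", tag))   # True
print(verify("tampered", tag))  # False
```

The per-call saving is small, but on a hot authentication path it removes a redundant allocation and encode step from every request.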
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (Contributor) left a comment

Code Review

This pull request introduces several performance optimizations across both the backend Python services and the frontend JavaScript application. In the backend, the secret key used for HMAC authentication is now pre-encoded into bytes to avoid repeated encoding during high-frequency verification calls. Additionally, a FastAPI endpoint was converted from asynchronous to synchronous to allow it to run in a thread pool, preventing blocking LLM calls from stalling the event loop. On the frontend, frequently accessed DOM elements are now cached in a this.ui object to reduce repeated DOM queries, and an optimization was added to the IntersectionObserver to stop observing elements once their transition is triggered.
