Conversation
- README.md: Main tracking document with architecture and status
- technical-analysis.md: Implementation details and service design
- quality-review.md: Quality gates, risk register, validation checklists
- strategic-recommendation.md: GO recommendation with 4-6 week timeline
- validation-results.md: Template for validation sprint results

Risk profile updated based on user feedback:
- CORS: CRITICAL → LOW (workaround available)
- Tool Calling: HIGH → MEDIUM (model-dependent, supported)

Priority models: Gemma-3-4b-it-GGUF, Llama-3.2-Instruct-GGUF

Validation sprint (next phase):
1. CORS header testing
2. Tool calling with compatible models
3. LLM quality baseline (10 editing prompts)
4. STT quality (WER calculation)
5. TTS viability (deferred for MVP)
- Add Lemonade AI provider settings to settingsStore (aiProvider, lemonadeEndpoint, lemonadeModel, lemonadeUseFallback, lemonadeServerAvailable)
- Update AIChatPanel with provider selector dropdown (OpenAI | Lemonade)
- Add server status indicator for Lemonade (online/offline with color-coded UI)
- Add model selector for Lemonade models (Qwen3-4B, Gemma-3-4B, Llama-3.2, Phi-3)
- Add fast fallback toggle for Lemonade provider
- Implement provider routing in sendMessage (calls lemonadeProvider or OpenAI API)
- Add CSS styles for provider-select, server-status, fallback-toggle
- HMR-safe singleton pattern for lemonadeProvider and lemonadeService
- Automatic server health checks via lemonadeService subscription

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
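The HMR-safe singleton mentioned in this commit can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code (the real lemonadeService internals may differ): stashing the instance on globalThis means a Vite hot-module reload of the service module reuses the existing instance instead of spawning a duplicate with its own health-check timers.

```typescript
// Hypothetical sketch: HMR-safe singleton for a service module.
// A Vite HMR reload re-executes the module; reading the instance
// back off globalThis avoids creating a second service (which would
// leak duplicate health-check intervals and subscriptions).

class LemonadeService {
  readonly createdAt = Date.now();
  // ... health checks, status subscriptions, etc.
}

const KEY = "__lemonadeServiceSingleton";

export function getLemonadeService(): LemonadeService {
  const g = globalThis as Record<string, unknown>;
  if (!(g[KEY] instanceof LemonadeService)) {
    g[KEY] = new LemonadeService();
  }
  return g[KEY] as LemonadeService;
}
```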
- README.md: Add Lemonade integration summary
- docs/lemonade/README.md: Track implementation status and architecture
- docs/lemonade/strategic-recommendation.md: Comprehensive analysis with validation results
- docs/lemonade/validation-results.md: Detailed validation sprint results (CORS, Tool Calling, LLM, STT)

Documents the Phase 1 MVP implementation with provider toggle, server health monitoring, and tool calling support for timeline editing.
- Document splitClip, deleteClip, trimClip test results
- Record performance metrics: 1.1s prefill, 15 tps decoding, 75-112s total
- Confirm tool calling CONDITIONAL PASS status
- Update validation-results.md with live test evidence

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Integrated master changes (v1.3.9):
- Security hardening (devBridgeAuth, fileAccessBroker, redact)
- FOSSA compliance and license scanning
- AI tool policy and approval gating
- Kie.ai integration (credit-based pricing)
- MediaBunny muxer migration
- MatAnyone2 integration
- Native Helper v0.3.9/v0.3.10 releases
- Changelog improvements and UI polish
- VRAM leak fixes and playback improvements

Preserved lemonade-support features:
- Lemonade AI provider integration
- Lemonade Server status indicator
- Lemonade model selection
- Lemonade documentation (docs/lemonade/)

Conflict resolutions:
- README.md: Combined version 1.3.9 + Lemonade badge + security section
- AIChatPanel.tsx: Merged Lemonade integration with approval mode policy
- settingsStore.ts: Auto-merged (both changes compatible)
hi, that's a big one. will look into it asap. THANKS! ;)
No problem. I mainly included the docs/phases/audits in case you wanted to look them over for verification. Otherwise we can take them out, because they make up most of the 100k PR.
Thanks for the PR. Conceptually, I'm not sure Lemonade adds a fundamentally new capability here. MasterSelects already exposes local AI control through the existing bridge / API layer, so a local LLM can already control the editor. Right now that already works through:
So if the goal is simply "use a local LLM to control the editor", that path already exists.

Example dev-server call:

```shell
curl -X POST http://127.0.0.1:5173/api/ai-tools \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <token-from-.ai-bridge-token>" \
  -d '{"tool":"_status","args":{}}'
```
Example Native Helper call:

```shell
curl -X POST http://127.0.0.1:9877/api/ai-tools \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <startup-token>" \
  -d '{"tool":"_status","args":{}}'
```
This also seems to already cover external-agent / MCP-style workflows. I do not currently see a dedicated built-in MCP server here, but the control surface for external tools is already present.

So to me Lemonade only really makes sense if the goal is specifically to provide a built-in local chat/provider inside MasterSelects itself.

If the goal is just local LLM-driven editor control, the existing external bridge/API approach already seems like the cleaner and more flexible architecture.
What do you think?
Open to revise some work/plugins. There is NPU accel via Whisper stuff too, so transcription is real fast.

PR Response: Lemonade Local AI (Short Version)

To: Reviewer

Full Disclosure + Context

I work on the Lemonade team as an AMD intern — I use Lemonade daily at home on my AMD AI PC with NPU acceleration. That's why I built this integration.

You're Right — Here's the Real Story

Your point is valid: the bridge already enables "local LLM controls editor."

Yes — exactly! That's the entire goal. This PR adds Lemonade as a built-in provider option (like OpenAI), not as a replacement for the bridge.

Your Implicit Point: "Why Not Just Document the Bridge?"

You're suggesting users could just use Lemonade via the bridge today — and you're right, they could.

Tradeoff:

We already have built-in OpenAI. This just adds local AI to the same dropdown.

What I should have been clearer about: Lemonade vs Ollama — Technical Reality

Key point: Any GGUF model that works with Ollama works with Lemonade — same llama.cpp backend.

What This PR Actually Does

Adds Lemonade as a built-in option (2 files, optional):

Does NOT remove: Bridge architecture, OpenAI provider, or external agent support.

Honest Comparison: All Local AI Paths

My Proposal

TL;DR: You're right that the bridge handles "local AI controls editor." Both Ollama and Lemonade work (same llama.cpp backend, GGUF models). Lemonade adds built-in UI + auto AMD NPU config. I use it daily on my AMD AI PC. Proposing we document both and let users choose. Thanks for the thoughtful review — this clarity is valuable regardless of what we decide.
- AIFeaturesSettings: Add Lemonade configuration section with provider toggle, model selector, server status indicator, and test connection button
- ApiKeysSettings: Add Lemonade endpoint configuration with setup links
- AIChatPanel: Already has provider toggle UI, now fully functional
- settingsStore: Export APIKeys type, add lemonade state persistence
- Mount guard pattern prevents state updates on unmounted components
- useCallback handlers prevent stale closures
- Loading state for async operations

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
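The mount-guard pattern this commit mentions can be distilled to a framework-free sketch. The names here are hypothetical (the real components would wire `unmount` through a React useEffect cleanup); the point is that an async result is applied only while the owner is still "mounted".

```typescript
// Hypothetical sketch of the mount-guard pattern: async results are
// dropped once the owning component has unmounted, preventing
// "setState on unmounted component" style bugs.

function createMountGuard() {
  let mounted = true;
  return {
    // In React this would be called from the useEffect cleanup.
    unmount(): void { mounted = false; },
    // Wrap a state setter so it becomes a no-op after unmount.
    guard<T>(apply: (value: T) => void): (value: T) => void {
      return (value: T) => { if (mounted) apply(value); };
    },
  };
}

// Usage: wrap the setter before kicking off an async call.
const applied: string[] = [];
const mg = createMountGuard();
const setStatus = mg.guard((s: string) => applied.push(s));

setStatus("online");   // applied: still mounted
mg.unmount();          // component unmounted
setStatus("offline");  // silently dropped
```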
- integration-analysis.md: Detailed analysis of UI integration gaps
- INTEGRATION-SUMMARY.md: Complete integration guide with test checklist

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…onade tool calling

- Detect model tool calling support using heuristics (Qwen/Gemma/Llama-3.2-3B+)
- Skip sending tools to models that don't support them
- Show 'NO CHOICES IN RESPONSE' error with helpful message suggesting alternative models
- Add visual indicators in model selector showing tool support (✓/✗)
- Add warning banner when Editor Mode enabled but model doesn't support tools
- Log full response for debugging when choices is empty

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
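The model heuristic this commit describes might look roughly like the following. The exact model IDs and rules are assumptions based on the commit text (Qwen/Gemma/Llama-3.2-3B+), not the PR's actual code:

```typescript
// Hypothetical sketch: decide whether to send tool definitions to a
// model based on its name. Families the commit lists as supported
// (Qwen, Gemma, Llama 3.2 at 3B+) pass; for everything else tools
// are skipped so the server doesn't return an empty `choices` array.

function supportsToolCalling(modelId: string): boolean {
  const id = modelId.toLowerCase();
  if (id.includes("qwen") || id.includes("gemma")) return true;
  // Llama 3.2 only at 3B parameters and up.
  const llama = id.match(/llama-3\.2-(\d+)b/);
  if (llama) return Number(llama[1]) >= 3;
  return false;
}
```

The UI indicators (✓/✗) and the Editor Mode warning banner would then key off the same predicate, so the heuristic lives in one place.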
- Create lemonadeWhisperService.ts for server-side transcription
- Add lemonadeTranscriptionEnabled and lemonadeTranscriptionFallback settings
- Update whisperService.ts to route to Lemonade when enabled and available
- Update TranscriptionSettings UI with Lemonade option and server status
- Auto-fallback to browser transcription when server offline
- Update clipTranscriber.ts to handle lemonade provider (no API key needed)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
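The routing described in this commit can be sketched as a small pure function. Names and the exact fallback behavior are assumptions (the real whisperService performs the server health check asynchronously); this just shows the decision logic:

```typescript
// Hypothetical sketch: pick a transcription backend. Route to the
// Lemonade server only when the feature is enabled AND the server is
// reachable; otherwise fall back to the in-browser Whisper path.

type TranscriptionBackend = "lemonade" | "browser";

interface TranscriptionSettings {
  lemonadeTranscriptionEnabled: boolean;
  lemonadeTranscriptionFallback: boolean;
}

function pickBackend(
  settings: TranscriptionSettings,
  serverOnline: boolean,
): TranscriptionBackend {
  if (settings.lemonadeTranscriptionEnabled) {
    if (serverOnline) return "lemonade";
    // Server offline: fall back to the browser path if allowed.
    if (settings.lemonadeTranscriptionFallback) return "browser";
    throw new Error("Lemonade server offline and fallback disabled");
  }
  return "browser";
}
```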
- Document issues fixed (NO CHOICES IN RESPONSE error)
- Document Whisper.cpp transcription backend feature
- Provide testing checklist and troubleshooting guide
- Include architecture diagrams and API endpoints

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove PR-*.md temporary files
- Remove QUICK-START.md, SECURITY-CHECKLIST.md, WHAT-CHANGED.md
- Remove intermediate analysis docs from docs/lemonade/
- Keep only essential docs: SPRINT-SUMMARY.md, validation-results.md, README.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Good point. Will implement next week!
Fire! Let me know. I plan on making a lil YouTube video showing it working later this week. |
I need to update my agent workflow, but I wanted to show you what I have before you update again.
I do need to clean up the PR a bit and remove a lot of stuff.
I do need to do some more manual testing.
Let me know your thoughts @Sportinger
Add Lemonade AI Provider (Optional Local AI)
Summary
Adds Lemonade Server as an optional local AI provider alongside OpenAI. Users can choose between:
- OpenAI (existing cloud provider, unchanged default)
- Lemonade Server (optional local AI running on their own machine)
What This Adds
New Features (Optional)
Files Added
- src/services/lemonadeProvider.ts
- src/services/lemonadeService.ts

Files Modified
- src/components/panels/AIChatPanel.tsx
- src/stores/settingsStore.ts
- README.md

Master Branch Updates Included
This PR also brings in ~40 commits from master that lemonade-support was missing. These updates are already in master — this PR just ensures we're current.

Included Updates Summary
- devBridgeAuth, fileAccessBroker — protects AI tool access from unauthorized file operations

If You Want a Minimal PR
We can split this into smaller PRs to make review easier:
Option A: Keep Together (Recommended)
Keep everything in this PR because:
Option B: Split Into 3 PRs
PR #1: Security + Infrastructure (required foundation)
PR #2: Lemonade AI Provider (optional enhancement)
PR #3: Optional AI Features (can defer indefinitely)
Testing
Build passes (npm run build)
For Users Who Want Lemonade
For Users Who Don't Want Lemonade
Nothing changes. The app works exactly as before:
Future Updates
We can refine later:
Screenshots
AIChatPanel with Provider Toggle
Settings → AI Features
Related Issues
Checklist
Reviewers: This PR brings Lemonade as an optional local AI provider. The main consideration is whether to include Kie.ai and MatAnyone2 in this PR or move them to separate branches. Recommend keeping them (they're already in master) but marking as optional features.