feat: Add AI helpers to JavaScript and Queries #41590
Conversation
…r or otherwise) and address
…n testing
- Add AI Settings page at /settings/ai with provider selection (Claude, OpenAI, Local LLM)
- Add LOCAL_LLM enum to AIProvider
- Add localLlmUrl and localLlmContextSize fields to OrganizationConfiguration
- Add Test Connection button for Local LLM that validates:
  - URL parsing and format
  - DNS resolution with resolved IP display
  - TCP connection to host:port
  - TLS handshake (for HTTPS)
  - HTTP response and endpoint validation
  - Checks if the response looks like an LLM API (JSON with expected fields)
  - Shows an actual response preview from the server
- Add Test Key button for Claude and OpenAI that:
  - Sends a real test request to verify the API key works
  - Shows step-by-step diagnostics
  - Displays the AI response on success
  - Shows detailed error info and suggestions on failure
- Fix GPT component to use a styled textarea instead of the missing Textarea export
- Fix response interceptor handling in the AI Settings page

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
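The first of those Test Connection validation steps (URL parsing and format) can be sketched as below. This is a hypothetical helper, not the actual Appsmith implementation; the function name and return convention are assumptions.

```typescript
// Hypothetical sketch of the "URL parsing and format" validation step.
// Returns an error message, or null when the URL looks usable.
function validateLlmUrl(raw: string): string | null {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return `"${raw}" is not a valid URL`;
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") {
    return `Unsupported protocol "${url.protocol}" (expected http or https)`;
  }
  if (!url.hostname) {
    return "URL is missing a hostname";
  }
  return null; // looks OK; later steps try DNS, TCP, TLS, and HTTP
}
```

Failing early on malformed input keeps the later network-level checks (DNS, TCP, TLS) from producing confusing errors for what is really a typo.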
…ling fixes
- Add AISidePanel component with quick actions (Explain, Fix Errors, Refactor, Add Comments)
- Add AIEditorLayout for side-by-side editor + AI panel integration
- Fix response extraction in sagas to handle both axios-wrapped and interceptor-unwrapped formats
- Add context detection: JS mode uses the AST to find the current function, SQL/GraphQL use a cursor window
- Fix icon names to use valid Appsmith design system icons
- Enable AI for JavaScript, SQL, and GraphQL editor modes in DynamicTextField

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Removed import of getJSFunctionLocationFromCursor from pages/Editor/JSEditor/utils which was creating a cyclic dependency chain. Now using a simple window-based approach for JavaScript context (same as SQL/GraphQL) - 15 lines before/after cursor. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
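The window-based approach described above can be sketched as follows. This is illustrative only: the function name and signature are assumptions, and the real code works off the editor's cursor position rather than a raw string.

```typescript
// Illustrative sketch: take N lines before and after the cursor line
// as AI context (15 for JavaScript at this point in the PR's history).
function getCursorWindow(
  code: string,
  cursorLine: number,
  windowSize = 15,
): string {
  const lines = code.split("\n");
  const start = Math.max(0, cursorLine - windowSize);
  const end = Math.min(lines.length, cursorLine + windowSize + 1);
  return lines.slice(start, end).join("\n");
}
```

Because it never imports editor-specific utilities, a helper like this cannot reintroduce the cyclic dependency that the AST-based approach created.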
When the AI response contains code blocks without a language specifier, use the current editor mode (SQL, GraphQL, etc.) instead of always defaulting to JavaScript. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
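The fallback logic amounts to something like the sketch below; the helper name and the exact fence-info handling are assumptions, not the actual code.

```typescript
// Sketch: when a fenced code block has no language specifier,
// fall back to the current editor mode instead of JavaScript.
function resolveBlockLanguage(fenceInfo: string, editorMode: string): string {
  const lang = fenceInfo.trim();
  if (lang.length > 0) return lang; // explicit ```sql, ```graphql, etc.
  // No specifier: prefer the active editor's mode over a JS default
  return editorMode || "javascript";
}
```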
Added CLEAR_AI_RESPONSE action to reset lastResponse and error when the editor mode changes. This prevents AI responses from one editor (e.g., JS) from persisting when switching to another editor (e.g., SQL). Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
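A minimal sketch of that reducer change is below; the state shape and action-type string are assumptions based on the commit description, not the actual Appsmith reducer.

```typescript
// Hypothetical slice of the AI assistant reducer handling the reset.
interface AIAssistantState {
  lastResponse: string | null;
  error: string | null;
}

type AIAction = { type: string; payload?: unknown };

function aiAssistantReducer(
  state: AIAssistantState = { lastResponse: null, error: null },
  action: AIAction,
): AIAssistantState {
  switch (action.type) {
    case "CLEAR_AI_RESPONSE":
      // Reset stale output when the editor mode changes (e.g. JS -> SQL)
      return { ...state, lastResponse: null, error: null };
    default:
      return state;
  }
}
```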
- Change Redux state from lastResponse to a messages array for multi-turn chat
- Pass conversation history to the Claude/OpenAI APIs for context-aware responses
- Add a chat-style UI with message bubbles and auto-scroll
- Add a clear-chat button and a green toggle for the enabled state
- Create AIMessageDTO for backend conversation-history support
- Simplify EE files to re-export from CE where possible

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Rename "Appsmith AI Beta" to "Ask AI" in the slash command menu
- Remove the beta flag from the Ask AI command
- Add Redux state and actions for AI panel open/close
- Wire the slash command to dispatch the OPEN_AI_PANEL action
- CodeEditor syncs Redux state to open the panel when triggered

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Increase the number of lines sent to the AI for context:
- JavaScript: 15 -> 50 lines before/after cursor
- SQL: 10 -> 40 lines before/after cursor
- GraphQL: 10 -> 40 lines before/after cursor (EE only)
- JSON: 10 -> 40 lines before/after cursor (EE only)

This helps the AI better understand larger code structures when providing assistance.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Create AIReferenceService to load mode-specific reference documentation
- Add reference files for JavaScript, SQL, GraphQL, and common issues
- Implement a three-tier fallback: external path -> bundled -> inline
- Update AIAssistantServiceCEImpl to use dynamic prompts
- Increase max_tokens from 4096 to 8192 for longer responses
- Increase response truncation from 100K to 200K chars
- Add the appsmith.ai.references.path configuration property

The reference files contain Appsmith-specific patterns, best practices, and common issues to help the AI provide more accurate, context-aware responses.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Documents how users can customize AI reference files:
- Docker volume mount
- Docker Compose
- Kubernetes ConfigMap
- Environment variable for a custom path
- File format guidelines
- Fallback behavior explanation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Replace per-editor AI panels with a single global side panel
- Add GlobalAISidePanel component with scrollable responses and a resizable input
- Update CodeEditor to dispatch the openAIPanelWithContext action
- Add editor context tracking (mode, entity, cursor position)
- Auto-close the panel on route navigation
- Fix the AI selector state path and add missing reducer properties
- Add quick actions (Explain, Fix Errors, Refactor, Add Comments)
- Support conversation history display with code block rendering

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add a getReferenceFilesInfo() method to AIReferenceService to detect whether external files are being used instead of bundled defaults
- Show a "Custom AI Context Files Active" notice on the AI Configuration page when external reference files are detected
- Refactor the AI settings page with reusable TestResultDisplay and ApiKeyTestResult components, reducing code duplication

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…rotection
- Add a model dropdown that auto-fetches available models after a successful connection test
- Add context size preset buttons (4K, 8K, 16K, 32K, 128K) with a custom input option
- Add a POST /ai-config/fetch-models endpoint to query Ollama's /api/tags
- Add the localLlmModel field to AIConfigDTO and OrganizationConfigurationCE
- Fix SSRF vulnerabilities by using WebClientUtils with IP filtering
- Block requests to internal IPs and cloud metadata endpoints (169.254.169.254)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
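The kind of IP filtering described here can be sketched as follows. The real implementation lives server-side in WebClientUtils (Java); this TypeScript version is only illustrative, and the function name is an assumption.

```typescript
// Hedged sketch of SSRF address filtering: block private ranges,
// loopback, link-local, and the cloud metadata endpoint.
function isBlockedAddress(ip: string): boolean {
  if (ip === "169.254.169.254") return true; // cloud metadata endpoint
  const octets = ip.split(".").map(Number);
  if (octets.length !== 4 || octets.some((o) => Number.isNaN(o))) return false;
  const [a, b] = octets;
  if (a === 10) return true;                        // 10.0.0.0/8
  if (a === 172 && b >= 16 && b <= 31) return true; // 172.16.0.0/12
  if (a === 192 && b === 168) return true;          // 192.168.0.0/16
  if (a === 127) return true;                       // loopback
  if (a === 169 && b === 254) return true;          // link-local
  return false;
}
```

Filtering must happen on the resolved IP, not the hostname, or a DNS entry pointing at an internal address slips through.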
…sage visibility
- Clear AI messages when switching between editor contexts (JS to Query)
- Fix user message bubble contrast by using a subtle background with a border
- Add the security audit document to gitignore

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
Summary of fixes:
- Fix Ollama connection: properly construct the /api/chat URL when the admin provides a base URL (e.g. http://localhost:11434) instead of the full endpoint path
- Fix request timeouts: increase the Axios timeout from 20s to 180s for AI requests, and increase nginx proxy_read_timeout to 180s, since LLM model loading (cold start) can take 60-90+ seconds
- Add Microsoft Copilot (Azure OpenAI) support: new copilotEndpoint field in the AI config, a dedicated callCopilotAPI method, and admin UI for configuring the Azure OpenAI endpoint URL
- Bypass SSRF protection for admin-configured LLM endpoints: create custom WebClient instances for the LOCAL_LLM and COPILOT providers to avoid blocking localhost/private network requests
- Improve error messages: surface actual error details instead of a generic "Failed to get AI response", with specific messages for timeout, connection refused, model not found, and auth errors
- Add a "Clear Chat" button to Quick Actions in all AI panels (CE and EE AISidePanel, GlobalAISidePanel) so users can easily clear the conversation history
- Fix Ollama test connection: use GET /api/tags instead of POST to avoid a 404 when testing the connection without a specific model

Co-authored-by: Cursor <cursoragent@cursor.com>
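The first fix (base URL vs. full endpoint path) boils down to logic like the following sketch; the helper name is an assumption, and the real code is server-side.

```typescript
// Sketch: if the admin supplied only a base URL, append Ollama's
// /api/chat path; if they already gave the full endpoint, leave it.
function buildOllamaChatUrl(configured: string): string {
  const trimmed = configured.replace(/\/+$/, ""); // drop trailing slashes
  if (trimmed.endsWith("/api/chat")) return trimmed;
  return `${trimmed}/api/chat`;
}
```

Normalizing trailing slashes first avoids producing URLs like `http://localhost:11434//api/chat`.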
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Resolve conflicts by keeping both AI assistant and favorites features in sagas, controllers, and service layers. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the "MS Copilot" AI provider with "Azure OpenAI" across the full stack. Users now provide endpoint, deployment name, and API key — the system constructs the Azure OpenAI URL internally. Existing COPILOT configurations are migrated at read time with no DB migration needed. Backend: add AZURE_OPENAI enum, domain fields, real API test endpoint, and URL construction in AI service. Frontend: new 3-field config form, updated provider dropdown, save/load/test logic. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
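The internal URL construction follows Azure OpenAI's documented chat-completions route, `{endpoint}/openai/deployments/{deployment}/chat/completions?api-version=...`. The sketch below is illustrative; the function name is an assumption, and the real construction happens in the Java AI service.

```typescript
// Sketch of building the Azure OpenAI chat-completions URL from the
// three fields the user provides (endpoint, deployment, API key is
// sent as a header, not in the URL).
function buildAzureOpenAIUrl(
  endpoint: string,
  deployment: string,
  apiVersion: string,
): string {
  const base = endpoint.replace(/\/+$/, ""); // normalize trailing slash
  return (
    `${base}/openai/deployments/${encodeURIComponent(deployment)}` +
    `/chat/completions?api-version=${encodeURIComponent(apiVersion)}`
  );
}
```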
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…OpenAI

Newer Azure OpenAI models (GPT-5.2, o1, o3) require max_completion_tokens instead of max_tokens and a specific api-version query parameter. Both were previously hardcoded. This change adds them as configurable fields in the AI admin settings UI with sensible defaults (api-version: 2024-12-01-preview, max_completion_tokens: 16384). It also removes the hardcoded temperature from Azure requests (unsupported by reasoning models) and adds error-body logging for Azure API failures.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
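The request-body change can be sketched as below. The object shape is an assumption based on the commit description; the real code builds this server-side in Java.

```typescript
// Sketch: newer Azure models want max_completion_tokens (not
// max_tokens) and reject the temperature parameter entirely.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildAzureRequestBody(
  messages: ChatMessage[],
  maxCompletionTokens = 16384,
) {
  return {
    messages,
    // Field name required by GPT-5.2 / o1 / o3 style models:
    max_completion_tokens: maxCompletionTokens,
    // Note: deliberately no `temperature` key; reasoning models
    // reject it, so it must be omitted rather than set to a default.
  };
}
```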
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough
Adds an AI assistant feature: frontend UI and Redux flows, sagas calling new APIs, backend controllers/services for multi-provider LLM integration, DTOs and domain fields, a migration to add an org flag, AI reference resources and admin settings, plus extensive documentation and security/audit artifacts.
Sequence Diagram(s)

sequenceDiagram
actor User
participant Client as Frontend UI
participant Redux as Redux State
participant Saga as AI Saga
participant API as Client API
participant Server as Backend Controller/Service
participant Provider as LLM Provider
User->>Client: open panel / send prompt
Client->>Redux: dispatch FETCH_AI_RESPONSE
Redux->>Saga: saga handles FETCH_AI_RESPONSE
Saga->>API: POST /v1/users/ai-assistant/request (AIRequestDTO)
API->>Server: request forwarded to UserControllerCE.requestAIResponse
Server->>Server: validate + route to AIAssistantServiceCEImpl
Server->>Provider: provider-specific HTTP call (OpenAI/Claude/Azure/Local)
Provider-->>Server: LLM response
Server->>Server: extract/truncate/format response
Server-->>API: return response payload
API-->>Saga: response received
Saga->>Redux: dispatch FETCH_AI_RESPONSE_SUCCESS
Redux->>Client: state updates -> render response (code blocks, actions)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
/build-deploy-preview skip-tests=true
Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/22636345472.
Disable AI in the API editor by removing JSON mode from EE isAIEnabled, remove Insert/Replace buttons from AISidePanel (copy-only workflow for v1), and delete unused AIEditorLayout files (CE and EE).
- Extract shared AI panel components (styled components, helpers, constants, animations) into ce/components/editorComponents/GPT/shared/, reducing ~1100 lines of duplication across the CE AISidePanel, EE AISidePanel, and GlobalAISidePanel
- Create AIConstants.java to consolidate 6 duplicated default values between AIAssistantServiceCEImpl and OrganizationControllerCE
- Wrap the blocking InetAddress.getByName() in Mono.fromCallable with the boundedElastic scheduler to avoid blocking Netty event loop threads
- Add MongoDB/NoSQL heuristics to extractReferencedTableNames for db.collection.method() patterns and JSON string value matching
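The db.collection.method() heuristic in the last bullet can be sketched as follows. The real extractReferencedTableNames is Java and also does JSON string value matching; this TypeScript version covers only the regex part and its name is illustrative.

```typescript
// Sketch: pull collection names out of MongoDB-style queries by
// matching db.<collection>.<method>( patterns.
function extractMongoCollections(query: string): string[] {
  const names = new Set<string>();
  const pattern = /\bdb\.(\w+)\.\w+\s*\(/g;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(query)) !== null) {
    names.add(match[1]); // the collection segment
  }
  return [...names];
}
```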
/build-deploy-preview skip-tests=true
Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/23045743891.
Deploy-Preview-URL: https://ce-41590.dp.appsmith.com
The AI Assistant was generating generic SQL queries because the datasource schema was never proactively fetched; it was only read from the Redux cache, which is empty until the user manually browses the schema tab. Now enrichContextWithSchema dispatches fetchDatasourceStructure and waits for the result before proceeding.

The AskAIButton was visible in every CodeEditor (including API editor JSON fields) because it only checked a global Redux flag, bypassing the mode-based isAIEnabled gate. Now the button is gated by this.AIEnabled. Also removed isJSONMode from DynamicTextField's AIAssisted computation for consistency.
/build-deploy-preview skip-tests=true
Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/23048181902.
- Fix TS2769 in AIAssistantSagas: use an inline type for take() predicates instead of ReduxAction<T>, which is incompatible with redux-saga's Predicate<Action<string>>
- Fix prettier violations in aiSchemaSerializer.ts: collapse multi-line RegExp constructors to single lines
- Fix a restricted-import lint error: create an EE re-export for ce/components/editorComponents/GPT/shared and update imports in GlobalAISidePanel and the EE AISidePanel to use the ee/ path
/build-deploy-preview skip-tests=true
Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/23052520196.
Deploy-Preview-URL: https://ce-41590.dp.appsmith.com
/build-deploy-preview skip-tests=true
Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/23267426495.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed 2e8d6a6 to 0b1a50f
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Import Components type from react-markdown and use Partial<Components> to properly type MARKDOWN_COMPONENTS, removing the incompatible Record<string, unknown> index signature. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
/build-deploy-preview skip-tests=true
Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/23269649542.
Deploy-Preview-URL: https://ce-41590.dp.appsmith.com
Scope inline code styling (background, padding, border-radius) to only code elements not inside pre blocks, preventing the light background from bleeding into dark-themed fenced code blocks. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
/build-deploy-preview skip-tests=true
Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/23273899909.
Deploy-Preview-URL: https://ce-41590.dp.appsmith.com
Failed server tests
Description
Slack Thread
Add AI helpers to JavaScript modules and queries, with a configurable AI assistant admin panel supporting multiple providers.
Summary
Test plan
Fixes #
Issue Number
Automation
/ok-to-test tags=""
Communication
Should the DevRel and Marketing teams inform users about this change?
Warning
Tests have not run on the HEAD 72757b4 yet
Thu, 19 Mar 2026 00:19:34 UTC
Summary by CodeRabbit
New Features
Documentation
Note
Medium Risk
Adds new organization-level AI configuration endpoints and a new AI-assistant request API that proxies calls to LLM providers, which impacts security-sensitive config handling and introduces new UI/Redux flows across editors.
Overview
Introduces an AI Assistant for JavaScript and query editors with new side-panel UIs (editor-scoped and global), quick actions, chat history, and code insertion/apply flows, backed by new Redux actions/state (aiAssistantReducer).

Adds organization-level AI configuration support: a new Admin Settings “AI Assistant” page plus client OrganizationApi methods and server /ai-config endpoints to store provider settings/keys, test API keys, test local LLM connectivity, and fetch local model lists.

Adds a server-side POST /users/ai-assistant/request endpoint and client UserApi.requestAIResponse (with a long timeout) so the frontend requests AI responses via the Appsmith server, along with assorted repo tooling/docs updates (Claude/Cursor rules, security audit writeups, gitignore tweaks).

Written by Cursor Bugbot for commit 270472a. This will update automatically on new commits.