
feat: Add AI helpers to JavaScript and Queries #41590

Open
salevine wants to merge 43 commits into release from feat/enable-ai

Conversation

@salevine
Contributor

@salevine salevine commented Mar 3, 2026

Description

Slack Thread

Add AI helpers to JavaScript modules and queries, with a configurable AI assistant admin panel supporting multiple providers.

Summary

  • Global AI side panel architecture with ongoing chat and conversation history
  • Support for Claude, OpenAI, Azure OpenAI, and Local LLM (Ollama) providers
  • AI admin settings page with connection testing, model selection, and context size presets
  • Azure OpenAI: configurable API version and max completion tokens fields
  • AI reference service for enhanced system prompts with external file customization
  • SSRF protection for LLM connections

Test plan

  • Select each AI provider and verify settings save/load correctly
  • Test Azure OpenAI with GPT-5.2 deployment — verify configurable API version and max tokens work
  • Test "Test Key" button for each provider
  • Test Local LLM connection with Ollama — verify model auto-detection
  • Open AI panel in JS editor and verify conversation flow
  • Verify AI panel clears context when switching editors

Fixes #Issue Number

Automation

/ok-to-test tags=""

Communication

Should the DevRel and Marketing teams inform users about this change?

  • Yes
  • No

Warning

Tests have not run on the HEAD 72757b4 yet


Thu, 19 Mar 2026 00:19:34 UTC

Summary by CodeRabbit

  • New Features

    • AI Assistant added to editors (Ask AI button, editor side panels, global AI panel) with multi-provider support and quick actions.
    • Admin AI Settings page to configure providers, keys, local LLMs, test connections, and fetch models.
    • MSSQL read-only connection support; MCP datasource plugin plan introduced.
  • Documentation

    • New AI reference guides (JavaScript, SQL, GraphQL), project guide, prompts reference, and security audit/fixes documentation.

Note

Medium Risk
Adds new organization-level AI configuration endpoints and a new AI-assistant request API that proxies calls to LLM providers, which impacts security-sensitive config handling and introduces new UI/Redux flows across editors.

Overview
Introduces an AI Assistant for JavaScript and query editors with new side-panel UIs (editor-scoped and global), quick actions, chat history, and code insertion/apply flows, backed by new Redux actions/state (aiAssistantReducer).

Adds organization-level AI configuration support: a new Admin Settings “AI Assistant” page plus client OrganizationApi methods and server /ai-config endpoints to store provider settings/keys, test API keys, test local LLM connectivity, and fetch local model lists.

Adds a server-side POST /users/ai-assistant/request endpoint and client UserApi.requestAIResponse (with long timeout) so the frontend requests AI responses via the Appsmith server, along with assorted repo tooling/docs updates (Claude/Cursor rules, security audit writeups, gitignore tweaks).

Written by Cursor Bugbot for commit 270472a. This will update automatically on new commits. Configure here.

salevine and others added 23 commits January 26, 2026 23:53
…n testing

- Add AI Settings page at /settings/ai with provider selection (Claude, OpenAI, Local LLM)
- Add LOCAL_LLM enum to AIProvider
- Add localLlmUrl and localLlmContextSize fields to OrganizationConfiguration
- Add Test Connection button for Local LLM that validates:
  - URL parsing and format
  - DNS resolution with resolved IP display
  - TCP connection to host:port
  - TLS handshake (for HTTPS)
  - HTTP response and endpoint validation
  - Checks if response looks like an LLM API (JSON with expected fields)
  - Shows actual response preview from the server
- Add Test Key button for Claude and OpenAI that:
  - Sends a real test request to verify API key works
  - Shows step-by-step diagnostics
  - Displays the AI response on success
  - Shows detailed error info and suggestions on failure
- Fix GPT component to use styled textarea instead of missing Textarea export
- Fix response interceptor handling in AI Settings page

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
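The Local LLM connection test above walks through URL parsing, DNS/TCP/TLS checks, and a heuristic response check. A minimal sketch of the URL-validation and response-heuristic steps (function and field names here are illustrative, not the PR's actual code):

```typescript
// Hypothetical sketch of two steps of the Local LLM connection test:
// URL validation and the "does this look like an LLM API?" heuristic.
interface UrlCheck {
  ok: boolean;
  host?: string;
  port?: number;
  error?: string;
}

function validateLlmUrl(raw: string): UrlCheck {
  let parsed: URL;
  try {
    parsed = new URL(raw);
  } catch {
    return { ok: false, error: "Not a valid URL" };
  }
  if (parsed.protocol !== "http:" && parsed.protocol !== "https:") {
    return { ok: false, error: "Only http/https URLs are supported" };
  }
  // Fall back to the protocol's default port when none is given.
  const port = parsed.port
    ? Number(parsed.port)
    : parsed.protocol === "https:" ? 443 : 80;
  return { ok: true, host: parsed.hostname, port };
}

// Heuristic: a JSON body with fields commonly returned by LLM endpoints
// (e.g. Ollama's "models", OpenAI-style "choices") likely is an LLM API.
function looksLikeLlmApi(body: unknown): boolean {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return "models" in b || "model" in b || "choices" in b || "message" in b;
}
```

The real implementation additionally performs DNS resolution, a TCP connect, and a TLS handshake before the HTTP check; those are omitted here for brevity.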
…ling fixes

- Add AISidePanel component with quick actions (Explain, Fix Errors, Refactor, Add Comments)
- Add AIEditorLayout for side-by-side editor + AI panel integration
- Fix response extraction in sagas to handle both axios wrapped and interceptor unwrapped formats
- Add context detection: JS mode uses AST to find current function, SQL/GraphQL uses cursor window
- Fix icon names to use valid Appsmith design system icons
- Enable AI for JavaScript, SQL, and GraphQL editor modes in DynamicTextField

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Removed import of getJSFunctionLocationFromCursor from pages/Editor/JSEditor/utils
which was creating a cyclic dependency chain. Now using a simple window-based
approach for JavaScript context (same as SQL/GraphQL) - 15 lines before/after cursor.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
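The window-based context approach described above (N lines before and after the cursor) can be sketched as follows; the function name and signature are assumptions for illustration:

```typescript
// Sketch of cursor-window context extraction: take up to `windowSize`
// lines before and after the cursor line, clamped to the document bounds.
function getCursorWindow(
  code: string,
  cursorLine: number, // 0-based line index of the cursor
  windowSize: number, // lines to include before and after the cursor
): string {
  const lines = code.split("\n");
  const start = Math.max(0, cursorLine - windowSize);
  const end = Math.min(lines.length, cursorLine + windowSize + 1);
  return lines.slice(start, end).join("\n");
}
```

Because this needs no AST, the same helper works for JavaScript, SQL, and GraphQL, which is what removes the cyclic dependency on the JS editor utils.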
When the AI response contains code blocks without a language specifier,
use the current editor mode (SQL, GraphQL, etc.) instead of always
defaulting to JavaScript.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
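The fallback rule above is small enough to state directly; a hedged sketch (names are illustrative):

```typescript
// Sketch: resolve the language of a fenced code block in an AI response,
// falling back to the current editor mode when no language is specified.
type EditorMode = "javascript" | "sql" | "graphql";

function resolveCodeBlockLanguage(
  fenceInfo: string | undefined, // e.g. "sql" from a ```sql fence
  editorMode: EditorMode,
): string {
  const lang = fenceInfo?.trim();
  // Prefer the explicit fence language; otherwise use the editor's mode
  // instead of always defaulting to JavaScript.
  return lang && lang.length > 0 ? lang : editorMode;
}
```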
Added CLEAR_AI_RESPONSE action to reset lastResponse and error when
the editor mode changes. This prevents AI responses from one editor
(e.g., JS) from persisting when switching to another editor (e.g., SQL).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Change Redux state from lastResponse to messages array for multi-turn chat
- Pass conversation history to Claude/OpenAI APIs for context-aware responses
- Add chat-style UI with message bubbles and auto-scroll
- Add clear chat button and green toggle for enabled state
- Create AIMessageDTO for backend conversation history support
- Simplify EE files to re-export from CE where possible

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
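The move from a single lastResponse to a messages array can be sketched as a minimal reducer; the action and state shapes here are assumptions, not the PR's actual aiAssistantReducer:

```typescript
// Minimal sketch of a messages-array chat state for multi-turn chat.
interface AIMessage {
  role: "user" | "assistant";
  content: string;
}

interface AIChatState {
  messages: AIMessage[];
}

type AIChatAction =
  | { type: "ADD_AI_MESSAGE"; payload: AIMessage }
  | { type: "CLEAR_AI_CHAT" };

function aiChatReducer(
  state: AIChatState = { messages: [] },
  action: AIChatAction,
): AIChatState {
  switch (action.type) {
    case "ADD_AI_MESSAGE":
      // Append so the full history can be forwarded to the provider
      // for context-aware responses.
      return { messages: [...state.messages, action.payload] };
    case "CLEAR_AI_CHAT":
      return { messages: [] };
    default:
      return state;
  }
}
```

Keeping the whole array in state is what lets both the chat UI (message bubbles) and the provider call (conversation history) read from one source.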
- Rename "Appsmith AI Beta" to "Ask AI" in slash command menu
- Remove beta flag from Ask AI command
- Add Redux state and actions for AI panel open/close
- Wire slash command to dispatch OPEN_AI_PANEL action
- CodeEditor syncs Redux state to open panel when triggered

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Increase the number of lines sent to AI for context:
- JavaScript: 15 -> 50 lines before/after cursor
- SQL: 10 -> 40 lines before/after cursor
- GraphQL: 10 -> 40 lines before/after cursor (EE only)
- JSON: 10 -> 40 lines before/after cursor (EE only)

This helps the AI better understand larger code structures when
providing assistance.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Create AIReferenceService to load mode-specific reference documentation
- Add reference files for JavaScript, SQL, GraphQL, and common issues
- Implement three-tier fallback: external path -> bundled -> inline
- Update AIAssistantServiceCEImpl to use dynamic prompts
- Increase max_tokens from 4096 to 8192 for longer responses
- Increase response truncation from 100K to 200K chars
- Add appsmith.ai.references.path configuration property

The reference files contain Appsmith-specific patterns, best practices,
and common issues to help the AI provide more accurate, context-aware
responses.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
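The three-tier fallback above (external path, then bundled resource, then inline default) can be sketched with injected loaders so the order is explicit; the shape is illustrative, the real service is Java:

```typescript
// Sketch of the three-tier reference-content fallback. Each loader
// returns null when its source is unavailable.
type Loader = () => string | null;

function loadReference(
  externalLoader: Loader, // e.g. file at appsmith.ai.references.path
  bundledLoader: Loader,  // e.g. classpath resource under ai-references/
  inlineDefault: string,  // last-resort hardcoded prompt text
): string {
  return externalLoader() ?? bundledLoader() ?? inlineDefault;
}
```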
Documents how users can customize AI reference files:
- Docker volume mount
- Docker Compose
- Kubernetes ConfigMap
- Environment variable for custom path
- File format guidelines
- Fallback behavior explanation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Replace per-editor AI panels with single global side panel
- Add GlobalAISidePanel component with scrollable responses and resizable input
- Update CodeEditor to dispatch openAIPanelWithContext action
- Add editor context tracking (mode, entity, cursor position)
- Auto-close panel on route navigation
- Fix AI selector state path and add missing reducer properties
- Add quick actions (Explain, Fix Errors, Refactor, Add Comments)
- Support conversation history display with code block rendering

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add getReferenceFilesInfo() method to AIReferenceService to detect
  whether external files are being used instead of bundled defaults
- Show "Custom AI Context Files Active" notice on AI Configuration page
  when external reference files are detected
- Refactor AI settings page with reusable TestResultDisplay and
  ApiKeyTestResult components, reducing code duplication

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…rotection

- Add model dropdown that auto-fetches available models after successful connection test
- Add context size preset buttons (4K, 8K, 16K, 32K, 128K) with custom input option
- Add POST /ai-config/fetch-models endpoint to query Ollama's /api/tags
- Add localLlmModel field to AIConfigDTO and OrganizationConfigurationCE
- Fix SSRF vulnerabilities by using WebClientUtils with IP filtering
- Block requests to internal IPs and cloud metadata endpoints (169.254.169.254)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
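The kind of IP filtering described above (the PR uses WebClientUtils on the server) can be sketched for IPv4 as a deny-list of loopback, RFC 1918 private ranges, and the link-local range that contains the cloud metadata address; this is a simplified illustration, not the actual filter:

```typescript
// Hedged sketch of SSRF IP filtering: reject loopback, private,
// and link-local IPv4 addresses (169.254.0.0/16 covers the cloud
// metadata endpoint 169.254.169.254). Unparseable input is rejected.
function isBlockedIp(ip: string): boolean {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((p) => Number.isNaN(p) || p < 0 || p > 255)) {
    return true; // fail closed on anything that is not a dotted quad
  }
  const [a, b] = parts;
  if (a === 127) return true;                       // loopback
  if (a === 10) return true;                        // 10.0.0.0/8
  if (a === 172 && b >= 16 && b <= 31) return true; // 172.16.0.0/12
  if (a === 192 && b === 168) return true;          // 192.168.0.0/16
  if (a === 169 && b === 254) return true;          // link-local / metadata
  return false;
}
```

A production filter must also resolve hostnames first (to defeat DNS rebinding) and handle IPv6, which this sketch omits.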
…sage visibility

- Clear AI messages when switching between editor contexts (JS to Query)
- Fix user message bubble contrast by using subtle background with border
- Add security audit document to gitignore

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
Summary of fixes:

- Fix Ollama connection: properly construct /api/chat URL when admin
  provides a base URL (e.g. http://localhost:11434) instead of the
  full endpoint path

- Fix request timeouts: increase Axios timeout from 20s to 180s for
  AI requests, and increase nginx proxy_read_timeout to 180s, since
  LLM model loading (cold start) can take 60-90+ seconds

- Add Microsoft Copilot (Azure OpenAI) support: new copilotEndpoint
  field in AI config, dedicated callCopilotAPI method, and admin UI
  for configuring the Azure OpenAI endpoint URL

- Bypass SSRF protection for admin-configured LLM endpoints: create
  custom WebClient instances for LOCAL_LLM and COPILOT providers to
  avoid blocking localhost/private network requests

- Improve error messages: surface actual error details instead of
  generic "Failed to get AI response", with specific messages for
  timeout, connection refused, model not found, and auth errors

- Add "Clear Chat" button to Quick Actions in all AI panels (CE and
  EE AISidePanel, GlobalAISidePanel) so users can easily clear the
  conversation history

- Fix Ollama test connection: use GET /api/tags instead of POST to
  avoid 404 when testing connection without a specific model

Co-authored-by: Cursor <cursoragent@cursor.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
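The Ollama URL fix above (append /api/chat only when the admin supplied a bare base URL) amounts to an idempotent join; the helper name is hypothetical:

```typescript
// Sketch of constructing the Ollama chat endpoint from an admin-supplied
// URL: strip trailing slashes, then append /api/chat unless the admin
// already provided the full endpoint path.
function buildOllamaChatUrl(configuredUrl: string): string {
  const trimmed = configuredUrl.replace(/\/+$/, "");
  return trimmed.endsWith("/api/chat") ? trimmed : `${trimmed}/api/chat`;
}
```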
Resolve conflicts by keeping both AI assistant and favorites features
in sagas, controllers, and service layers.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the "MS Copilot" AI provider with "Azure OpenAI" across the full
stack. Users now provide endpoint, deployment name, and API key — the
system constructs the Azure OpenAI URL internally. Existing COPILOT
configurations are migrated at read time with no DB migration needed.

Backend: add AZURE_OPENAI enum, domain fields, real API test endpoint,
and URL construction in AI service. Frontend: new 3-field config form,
updated provider dropdown, save/load/test logic.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
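The internal URL construction described above follows Azure OpenAI's documented pattern (`{endpoint}/openai/deployments/{deployment}/chat/completions?api-version=...`); a hedged sketch, with the function name being an assumption:

```typescript
// Sketch of building the Azure OpenAI chat-completions URL from the
// three admin-supplied fields: endpoint, deployment name, API version.
function buildAzureOpenAiUrl(
  endpoint: string,   // e.g. https://myresource.openai.azure.com
  deployment: string, // deployment name chosen by the admin
  apiVersion: string, // e.g. 2024-12-01-preview
): string {
  const base = endpoint.replace(/\/+$/, "");
  return `${base}/openai/deployments/${encodeURIComponent(deployment)}` +
    `/chat/completions?api-version=${encodeURIComponent(apiVersion)}`;
}
```

Constructing the URL server-side is what lets existing COPILOT configurations (which stored a full endpoint URL) be migrated at read time without a DB migration.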
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…OpenAI

Azure OpenAI newer models (GPT-5.2, o1, o3) require max_completion_tokens
instead of max_tokens and a specific api-version query parameter. Both were
previously hardcoded. This adds them as configurable fields in the AI admin
settings UI with sensible defaults (api-version: 2024-12-01-preview,
max_completion_tokens: 16384).

Also removes hardcoded temperature from Azure requests (unsupported by
reasoning models) and adds error body logging for Azure API failures.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
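The request-body change above can be sketched as follows; `max_completion_tokens` is the Azure API's field name per the commit, while the helper itself is illustrative:

```typescript
// Sketch of an Azure OpenAI request body for newer models (GPT-5 /
// o-series): use max_completion_tokens instead of max_tokens, and omit
// temperature entirely, since reasoning models reject it.
function buildAzureRequestBody(
  messages: Array<{ role: string; content: string }>,
  maxCompletionTokens: number, // admin-configurable, default 16384
): Record<string, unknown> {
  return {
    messages,
    max_completion_tokens: maxCompletionTokens,
    // note: no "temperature" and no "max_tokens" fields
  };
}
```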
@salevine salevine requested a review from sharat87 as a code owner March 3, 2026 18:01
@salevine salevine added the Enhancement New feature or request label Mar 3, 2026
@salevine salevine requested a review from abhvsn as a code owner March 3, 2026 18:01
@salevine salevine added the Enhancement New feature or request label Mar 3, 2026
@coderabbitai
Contributor

coderabbitai bot commented Mar 3, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds an AI assistant feature: frontend UI and Redux flows, sagas calling new APIs, backend controllers/services for multi-provider LLM integration, DTOs and domain fields, migration to add org flag, AI reference resources and admin settings, plus extensive documentation and security/audit artifacts.

Changes

Cohort / File(s) Summary
Documentation & Planning
\.cursor/rules/..., cursorrules, CLAUDE.md, AI_ADMIN_CONTROL_ANALYSIS.md, SECURITY_*.md, SERVER_SIDE_PROXY_PLAN.md, docs/AI_PROMPTS_REFERENCE.md, .cursor/plans/*, .claude/*
New project guides, security audits, implementation plans, prompt references, and tooling hooks. Pure documentation and planning artifacts.
Frontend — Redux & Sagas
app/client/src/ce/actions/aiAssistantActions.ts, app/client/src/ce/reducers/aiAssistantReducer.ts, app/client/src/ce/selectors/aiAssistantSelectors.ts, app/client/src/ce/sagas/AIAssistantSagas.ts, app/client/src/ce/constants/ReduxActionConstants.tsx, app/client/src/ce/sagas/index.tsx, app/client/src/ce/reducers/index.tsx
New AI action creators, reducer, selectors, saga listeners for fetching AI responses and loading AI settings; constants and root saga/reducer registration added.
Frontend — UI Components & Integration
app/client/src/ce/components/.../GPT/*, app/client/src/ce/components/.../GlobalAISidePanel/*, app/client/src/ee/components/.../GPT/*, app/client/src/components/editorComponents/CodeEditor/*, app/client/src/components/editorComponents/form/*, app/client/src/ce/components/.../AskAIButton.tsx
Adds AISidePanel, GlobalAISidePanel, AIEditorLayout, AskAIButton, trigger/context logic, code insertion actions, quick prompts, and editor integration; mode-aware context extraction and UI wiring.
Frontend — API, Admin & Pages
app/client/src/ce/api/OrganizationApi.ts, app/client/src/ce/api/UserApi.tsx, app/client/src/pages/AdminSettings/AI/*, app/client/src/ce/pages/AdminSettings/config/*, app/client/src/pages/AppIDE/layouts/*, app/client/src/sagas/ActionSagas.ts
Client API methods for AI config, model fetching, test endpoints; Admin AISettings page and config registration; layout inclusion of GlobalAISidePanel; action saga changes to open AI panel.
Backend — Services & Reference Loading
app/server/.../services/ce/AIAssistantServiceCEImpl.java, app/server/.../services/AIAssistantServiceCE.java, app/server/.../services/ce/AIReferenceServiceCEImpl.java, app/server/.../services/ce/AIReferenceServiceCE.java, app/server/.../services/AIReferenceService.java, app/server/.../services/AIReferenceServiceImpl.java
New CE AI assistant implementation supporting multiple providers (LOCAL_LLM, AZURE_OPENAI, COPILOT, CLAUDE, OPENAI), prompt/response handling, and reference-content service with fallback and caching.
Backend — Controllers, DTOs & Enums
app/server/.../controllers/ce/OrganizationControllerCE.java, app/server/.../controllers/ce/UserControllerCE.java, app/server/.../controllers/OrganizationController.java, app/server/.../controllers/UserController.java, app/server/.../dtos/AI*.java, app/server/.../domains/AIProvider.java
New endpoints for AI config CRUD, testing, model fetch; POST /ai-assistant/request endpoint; DTOs for config, editor context, messages, requests; AIProvider enum; controller constructors updated for new dependencies.
Backend — Domain & Migration
app/server/.../domains/ce/OrganizationConfigurationCE.java, app/server/.../migrations/db/ce/Migration075AddIsAIAssistantEnabledToOrganizationConfiguration.java
OrganizationConfiguration extended with AI provider keys, endpoints, enable flag and related fields; migration adds isAIAssistantEnabled with safe defaults.
Backend — Resources & Config
app/server/.../resources/ai-references/*, app/server/.../resources/application-ce.properties, .gitignore
Added AI reference markdowns (javascript/sql/graphql/common issues), README, application property for references path, and gitignore entry for an audit PDF.
Misc — Plugin/Plans/Other
.cursor/plans/*, FixMSSQLReadOnly.md, app/client/.../ce/sagas/userSagas.tsx
Various planning docs, MCP plugin plan, MSSQL read-only note, small saga null-safety tweak.

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant Client as Frontend UI
    participant Redux as Redux State
    participant Saga as AI Saga
    participant API as Client API
    participant Server as Backend Controller/Service
    participant Provider as LLM Provider

    User->>Client: open panel / send prompt
    Client->>Redux: dispatch FETCH_AI_RESPONSE
    Redux->>Saga: saga handles FETCH_AI_RESPONSE
    Saga->>API: POST /v1/users/ai-assistant/request (AIRequestDTO)
    API->>Server: request forwarded to UserControllerCE.requestAIResponse
    Server->>Server: validate + route to AIAssistantServiceCEImpl
    Server->>Provider: provider-specific HTTP call (OpenAI/Claude/Azure/Local)
    Provider-->>Server: LLM response
    Server->>Server: extract/truncate/format response
    Server-->>API: return response payload
    API-->>Saga: response received
    Saga->>Redux: dispatch FETCH_AI_RESPONSE_SUCCESS
    Redux->>Client: state updates -> render response (code blocks, actions)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🤖✨ Panels slide, prompts take flight,
Redux holds whispers through the night,
Java routes and providers align,
Code blocks copy, editors shine,
An assistant lands — helpful, bright.


@salevine
Contributor Author

salevine commented Mar 3, 2026

/build-deploy-preview skip-tests=true

@github-actions

github-actions bot commented Mar 3, 2026

Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/22636345472.
Workflow: On demand build Docker image and deploy preview.
skip-tests: true.
env: ``.
PR: 41590.
recreate: .

Disable AI in the API editor by removing JSON mode from EE isAIEnabled,
remove Insert/Replace buttons from AISidePanel (copy-only workflow for v1),
and delete unused AIEditorLayout files (CE and EE).
- Extract shared AI panel components (styled components, helpers,
  constants, animations) into ce/components/editorComponents/GPT/shared/
  reducing ~1100 lines of duplication across CE AISidePanel,
  EE AISidePanel, and GlobalAISidePanel
- Create AIConstants.java to consolidate 6 duplicated default values
  between AIAssistantServiceCEImpl and OrganizationControllerCE
- Wrap blocking InetAddress.getByName() in Mono.fromCallable with
  boundedElastic scheduler to avoid blocking Netty event loop threads
- Add MongoDB/NoSQL heuristics to extractReferencedTableNames for
  db.collection.method() patterns and JSON string value matching
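A db.collection.method() heuristic like the one added above can be sketched with a regex; the pattern and function name are illustrative, not the PR's actual implementation:

```typescript
// Sketch of extracting referenced MongoDB collection names from query
// text by matching db.<collection>.<method>( patterns.
function extractCollectionNames(query: string): string[] {
  const names = new Set<string>();
  const pattern = /\bdb\.([A-Za-z_][A-Za-z0-9_]*)\.[A-Za-z_]+\s*\(/g;
  let m: RegExpExecArray | null;
  while ((m = pattern.exec(query)) !== null) {
    names.add(m[1]); // the collection segment between the two dots
  }
  return [...names];
}
```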
@linear

linear bot commented Mar 13, 2026

@subrata71
Collaborator

/build-deploy-preview skip-tests=true

@github-actions

Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/23045743891.
Workflow: On demand build Docker image and deploy preview.
skip-tests: true.
env: ``.
PR: 41590.
recreate: .
base-image-tag: .

@github-actions

Deploy-Preview-URL: https://ce-41590.dp.appsmith.com

The AI Assistant was generating generic SQL queries because the
datasource schema was never proactively fetched — it only read from
the Redux cache which is empty until the user manually browses the
schema tab. Now enrichContextWithSchema dispatches
fetchDatasourceStructure and waits for the result before proceeding.

The AskAIButton was visible in every CodeEditor (including API editor
JSON fields) because it only checked a global Redux flag, bypassing
the mode-based isAIEnabled gate. Now the button is gated by
this.AIEnabled. Also removed isJSONMode from DynamicTextField's
AIAssisted computation for consistency.
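The schema-enrichment fix above (fetch the datasource structure when the cache is empty, instead of relying on the user having browsed the schema tab) reduces to a fetch-if-missing pattern; this sketch abstracts away the saga machinery, and the names are assumptions:

```typescript
// Sketch of the proactive schema fetch: use the cached structure when
// present, otherwise fetch and await it before building the AI prompt.
async function getSchemaForContext(
  cached: string | undefined,                 // Redux-cached schema, if any
  fetchStructure: () => Promise<string>,      // dispatch + wait for result
): Promise<string> {
  if (cached !== undefined && cached.length > 0) {
    return cached;
  }
  return fetchStructure();
}
```

In the actual saga this is a dispatch of fetchDatasourceStructure followed by waiting on its success action rather than an awaited promise.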
@subrata71
Collaborator

/build-deploy-preview skip-tests=true

@github-actions

Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/23048181902.
Workflow: On demand build Docker image and deploy preview.
skip-tests: true.
env: ``.
PR: 41590.
recreate: .
base-image-tag: .

- Fix TS2769 in AIAssistantSagas: use inline type for take()
  predicates instead of ReduxAction<T> which is incompatible with
  redux-saga's Predicate<Action<string>>
- Fix prettier violations in aiSchemaSerializer.ts: collapse
  multi-line RegExp constructors to single lines
- Fix restricted-import lint error: create EE re-export for
  ce/components/editorComponents/GPT/shared and update imports
  in GlobalAISidePanel and EE AISidePanel to use ee/ path
@subrata71
Collaborator

/build-deploy-preview skip-tests=true

@github-actions

Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/23052520196.
Workflow: On demand build Docker image and deploy preview.
skip-tests: true.
env: ``.
PR: 41590.
recreate: .
base-image-tag: .

@github-actions

Deploy-Preview-URL: https://ce-41590.dp.appsmith.com

@salevine salevine requested a review from riodeuno as a code owner March 18, 2026 20:48
@salevine
Contributor Author

/build-deploy-preview skip-tests=true

@github-actions

Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/23267426495.
Workflow: On demand build Docker image and deploy preview.
skip-tests: true.
env: ``.
PR: 41590.
recreate: .
base-image-tag: .

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
salevine and others added 2 commits March 18, 2026 17:30
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Import Components type from react-markdown and use Partial<Components>
to properly type MARKDOWN_COMPONENTS, removing the incompatible
Record<string, unknown> index signature.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@salevine
Contributor Author

/build-deploy-preview skip-tests=true

@github-actions

Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/23269649542.
Workflow: On demand build Docker image and deploy preview.
skip-tests: true.
env: ``.
PR: 41590.
recreate: .
base-image-tag: .

@github-actions

Deploy-Preview-URL: https://ce-41590.dp.appsmith.com

Scope inline code styling (background, padding, border-radius) to only
code elements not inside pre blocks, preventing the light background
from bleeding into dark-themed fenced code blocks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
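The selector scoping described above is the `:not(pre) > code` pattern: style only code elements whose parent is not a pre block, so fenced code blocks keep their own dark theme. A sketch as a CSS string constant (the values are illustrative, not the PR's actual styles):

```typescript
// Sketch of inline-code styling scoped away from fenced blocks.
// `:not(pre) > code` matches inline code but not code inside <pre>.
const INLINE_CODE_CSS = `
  :not(pre) > code {
    background: #f0f0f0;
    padding: 2px 4px;
    border-radius: 4px;
  }
`;
```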
@salevine
Contributor Author

/build-deploy-preview skip-tests=true

@github-actions

Deploying Your Preview: https://github.com/appsmithorg/appsmith/actions/runs/23273899909.
Workflow: On demand build Docker image and deploy preview.
skip-tests: true.
env: ``.
PR: 41590.
recreate: .
base-image-tag: .

@github-actions

Deploy-Preview-URL: https://ce-41590.dp.appsmith.com

@github-actions

Failed server tests

  • com.appsmith.server.git.ServerSchemaMigrationEnforcerTest#saveGitRepo_ImportAndThenExport_diffOccurs

@github-actions

Failed server tests

  • com.appsmith.server.helpers.GitUtilsTest#isRepoPrivate


Labels

Enhancement New feature or request

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants