Feature Summary
Allow project admins and admins to assign a specific LLM integration (provider + model) to each individual prompt within a Prompt Config, enabling different AI features to use different LLM settings within the same project.
Problem Statement
Current Situation
Currently, TestPlanIt supports a flexible prompt configuration system where admins can create PromptConfig sets containing per-feature prompts (e.g., test_case_generation, auto_tag, editor_assistant, etc.). However, the LLM integration (provider, model, and associated settings) is selected at the project level — a single LLM integration applies to all AI features within that project.
This creates a one-size-fits-all constraint: if a project uses GPT-4o for test case generation, it must also use GPT-4o for auto-tagging, the editor assistant, and every other AI feature — even if a cheaper or faster model would be perfectly adequate for some of those tasks.
Pain points:
- High-cost models like gpt-4o or claude-opus are billed for lightweight tasks like auto-tagging that could run on gpt-4o-mini or claude-haiku
- Teams cannot optimize for speed vs. quality on a per-feature basis (e.g., fast model for inline editor assistant, high-quality model for test case generation from Jira issues)
- Prompt configs cannot be self-contained — they require a separate project-level LLM configuration decision that isn't communicated through the config itself
- There is no way to express "this prompt works best with a specific model" at the admin level when authoring prompt configs
Desired Outcome
Admins should be able to configure which LLM integration (and optionally which specific model) to use for each feature prompt within a PromptConfig. When a project adopts a prompt config, each feature automatically uses the intended LLM — no additional per-feature configuration required. Project admins should also be able to override these per-prompt LLM assignments at the project level if needed.
Proposed Solution
Add optional llmIntegrationId and modelOverride fields to PromptConfigPrompt. When a prompt is resolved for execution, the PromptResolver service uses the prompt-level LLM integration if set, falling back to the project's default integration if not. This makes the prompt config self-describing: each prompt carries its own LLM preference alongside its system/user prompt content.
The admin prompt editor (at /admin/prompts) would gain a per-feature LLM selector dropdown, allowing admins to associate a preferred integration and model with each feature prompt. The project AI Models settings page would display this information and allow project admins to override it per-feature if needed.
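The proposed fallback can be sketched as a small resolver function. This is a minimal illustration, not the actual TestPlanIt code: the interface shapes and field names other than llmIntegrationId and modelOverride are assumptions made for the example.

```typescript
// Hypothetical shapes; only llmIntegrationId and modelOverride come from the
// proposal, the rest is illustrative.
interface PromptConfigPrompt {
  feature: string;
  systemPrompt: string;
  userPrompt: string;
  llmIntegrationId?: string; // proposed optional FK to an LLM integration
  modelOverride?: string;    // proposed optional model override
}

interface ResolvedLlmChoice {
  integrationId: string;
  model?: string;
}

// Use the prompt-level integration when set; otherwise fall back to the
// project's default integration, as described above.
function resolveLlm(
  prompt: PromptConfigPrompt,
  projectDefaultIntegrationId: string
): ResolvedLlmChoice {
  if (prompt.llmIntegrationId) {
    return {
      integrationId: prompt.llmIntegrationId,
      model: prompt.modelOverride,
    };
  }
  return { integrationId: projectDefaultIntegrationId };
}
```

A prompt with no llmIntegrationId behaves exactly as today, which keeps the schema change backward compatible.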
User Story
As a project admin or admin, I want to assign a different LLM integration and model to each prompt in a Prompt Config so that I can optimize cost, speed, and quality independently for each AI feature — using an expensive model only where it adds value and a cheaper model for routine tasks.
Acceptance Criteria
- PromptConfigPrompt supports an optional associated LLM integration (and an optional model override within that integration)
- The admin prompt editor (/admin/prompts) shows a per-feature LLM integration selector alongside the existing prompt editing fields
- The PromptResolver service respects the per-prompt LLM assignment and passes the correct integration ID to LlmManager
- Project admins can override the per-prompt assignment at the project level (via LlmFeatureConfig)
- Resolution precedence: project-level LlmFeatureConfig > prompt-level assignment > project default integration
Design Mockups
In the prompt config editor, each feature's accordion section (currently showing system prompt, user prompt, temperature, and max tokens) would gain a new LLM Integration row:
┌─ Test Case Generation ───────────────────────────────┐
│ LLM Integration: [OpenAI (GPT-4o) ▼] [Model: gpt-4o ▼] │
│ System Prompt: [...] │
│ User Prompt: [...] │
│ Temperature: [0.7] Max Tokens: [2048] │
└──────────────────────────────────────────────────────┘
┌─ Auto Tag ───────────────────────────────────────────┐
│ LLM Integration: [OpenAI (GPT-4o-mini) ▼] [Model: gpt-4o-mini ▼] │
│ System Prompt: [...] │
│ User Prompt: [...] │
│ Temperature: [0.3] Max Tokens: [512] │
└──────────────────────────────────────────────────────┘
Alternative Solutions
Option 1: Per-feature LLM assignment only at project level (no prompt config changes)
Expand the existing LlmFeatureConfig model to be more prominent in the Project AI Models settings UI. This avoids schema changes to PromptConfigPrompt but means the LLM selection lives only at the project level and cannot be authored as part of a reusable prompt config. Teams would need to configure this individually for every project.
Downside: Does not allow prompt configs to carry LLM intent. Every project using the same prompt config would need separate per-feature LLM configuration.
Option 2: Named LLM "roles" instead of direct integration references
Rather than binding a prompt to a specific LLM integration ID, define abstract roles (e.g., high_quality, fast, balanced) and let each project map roles to actual integrations. Prompt configs reference roles, not specific integrations — making them more portable across installations.
Downside: More complex to implement and configure. Likely over-engineered for most use cases. Could be a future enhancement on top of the proposed solution.
Technical Considerations
Dependencies
- PromptConfigPrompt schema change (add llmIntegrationId FK and modelOverride string field)
- Backend updates (PromptResolver service update, LlmManager invocation update)
Performance Impact
Minimal. The PromptResolver already performs a database lookup to resolve the prompt config. Adding the LLM integration ID to the resolved prompt adds no extra query — it's included in the same PromptConfigPrompt fetch. The LlmManager already handles routing to different integrations.
Security Considerations
The per-prompt LLM integration reference must be validated against the set of active integrations. Prompt configs are admin-managed resources (read access is open; write access is ADMIN only), so the attack surface is the same as the existing prompt config editor. No new privilege escalation vectors are introduced since the LLM integrations themselves are admin-controlled.
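The validation step above amounts to a simple membership check on the write path. The function and parameter names below are illustrative assumptions, not the actual TestPlanIt API; the set of active integration IDs is assumed to be loaded from the integrations table.

```typescript
// Accept a prompt-level integration reference only if it points at a
// currently active integration; an unset reference is always valid because
// the resolver falls back to the project default.
function isValidIntegrationRef(
  llmIntegrationId: string | null,
  activeIntegrationIds: Set<string>
): boolean {
  if (llmIntegrationId === null) return true;
  return activeIntegrationIds.has(llmIntegrationId);
}
```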
Business Value
Priority
Cost optimization is a concrete, recurring concern for teams running AI features at scale. The difference between gpt-4o and gpt-4o-mini for auto-tagging is roughly 15–30x in cost per token. Teams with high test artifact volume will feel this immediately.
Affected User Groups
Primarily Project Managers and Admins who configure AI settings, with indirect benefit to all users through cost reduction and potentially faster responses for lightweight features.
Expected Usage
Configuration is a one-time or infrequent setup activity, but it affects every AI operation performed daily.
Implementation Effort
Schema change is minimal (two nullable fields on PromptConfigPrompt). The main work is updating the PromptResolver service to surface the integration ID, updating LlmManager call sites to use the resolved integration, and building the UI selector in PromptFeatureSection.tsx.
Related Issues/Features
- Related to the existing LlmFeatureConfig model (per-project, per-feature LLM overrides) — this feature makes similar overrides available at the prompt config authoring level
- Related to the project AI Models settings page (/projects/settings/[projectId]/ai-models) — override UI would live here
Additional Context
The infrastructure for per-feature LLM selection already partially exists via LlmFeatureConfig (which has llmIntegrationId, model, temperature, and maxTokens fields per project per feature). This feature completes that picture by allowing the prompt config itself — not just each individual project — to carry LLM intent. This is especially valuable in multi-project organizations where a shared prompt config should "just work" with the right LLM for each feature without requiring manual per-project configuration.
The resolution chain would become:
- Project LlmFeatureConfig override (project admin–level override)
- PromptConfigPrompt.llmIntegrationId (prompt config author's recommendation)
- Project's default ProjectLlmIntegration (existing fallback)
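The three-level chain above is a first-non-null lookup. As a minimal sketch, with illustrative names rather than the actual resolver API:

```typescript
// Per-project, per-feature override record (mirrors LlmFeatureConfig).
interface FeatureOverride {
  llmIntegrationId: string;
  model?: string;
}

// Highest-precedence non-null source wins:
// project LlmFeatureConfig > PromptConfigPrompt.llmIntegrationId > project default.
function resolveIntegrationId(
  featureOverride: FeatureOverride | null, // project-level LlmFeatureConfig
  promptLevelIntegrationId: string | null, // PromptConfigPrompt.llmIntegrationId
  projectDefaultIntegrationId: string      // ProjectLlmIntegration fallback
): string {
  return (
    featureOverride?.llmIntegrationId ??
    promptLevelIntegrationId ??
    projectDefaultIntegrationId
  );
}
```

Keeping the chain in one small pure function makes the precedence easy to unit-test and keeps the ordering documented in exactly one place.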
Examples from Other Tools
LangChain and LangSmith allow individual chain/prompt nodes to specify their own model configuration. OpenAI's Assistants API and Anthropic's prompt management tooling similarly allow model selection per "assistant" or "prompt variant" rather than requiring a single global model choice.
Checklist