Description
Describe the feature or problem you'd like to solve
Add support for configuring a default model for /fleet spawned subagents at the global and/or project level.
Proposed solution
Today, if a user wants /fleet subagents to use a specific model, they need to restate that preference in prompts, instructions, or agent definitions.
That creates repeated prompt boilerplate and makes model selection harder to manage across workflows, especially when users want to change their preferred default later.
Allow users to configure the default model used by /fleet subagents through:
- a global setting for all /fleet usage
- an optional project-level override for repository-specific behavior
This would let users control /fleet model selection without needing to repeat the model in prompts or encode it into shared instructions or agents.
If a default /fleet model is configured, spawned subagents should use that model unless the command explicitly overrides it.
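The precedence described above (explicit command override, then project config, then global config) could be resolved with logic along these lines. This is a minimal sketch of the intent, not an existing API: the function name, parameter names, and model identifiers are all hypothetical.

```python
from typing import Optional

def resolve_fleet_model(
    command_override: Optional[str],  # hypothetical per-invocation override, if any
    project_default: Optional[str],   # project-level setting, if the repo defines one
    global_default: Optional[str],    # user's global setting, if configured
    builtin_default: str = "default-model",  # placeholder for the tool's current fallback
) -> str:
    """Pick the model for spawned /fleet subagents; the most specific setting wins."""
    for candidate in (command_override, project_default, global_default):
        if candidate:
            return candidate
    return builtin_default

# Example: a cheap global default, overridden per-project for migration work.
resolve_fleet_model(None, "strong-reasoning-model", "fast-small-model")
# → "strong-reasoning-model"
```

The key design point is that each level is optional and silently falls through to the next, so a user can set only a global default and still get project-specific behavior later without touching their prompts.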
Example prompts or workflows
- Low-cost parallel research by default: A user configures /fleet to use a cheaper, faster model globally, then runs `/fleet investigate failing tests in these 6 files`. All spawned subagents automatically use that model.
- Project-specific override: A repository is configured so that `/fleet update deprecated API usage across the codebase` uses a stronger reasoning model for migration work, without repo-specific prompt boilerplate.
- Consistent team behavior: A team sets a shared project default so that `/fleet review these changed files for bugs, performance issues, and missing tests` behaves consistently across contributors.
- Easy rollout of newer models: When a newer low-cost model becomes available, a user updates one config value and existing /fleet workflows pick it up without editing saved prompts or instructions.
- Cleaner reusable prompts: Commands like `/fleet summarize the implementation options for these 4 modules` can stay focused on the task itself instead of repeating model-selection guidance.
Additional context
This seems especially useful now that model options are evolving quickly and users may want to optimize /fleet usage for cost, speed, or reasoning quality depending on the project.