Description
The documentation currently does not explain how to configure and use local Ollama models with fast-agent.
Through experimentation, I discovered that the correct way to reference an Ollama model is by using the generic. prefix in the default_model field of fastagent.config.yaml.
For example:

default_model: generic.llama3

This works out of the box, even without a models: section (a minimal working config is sketched after the list below). However, this behavior is undocumented in:
The README
The output of fast-agent setup
The comments in the generated config file
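
For reference, here is a minimal sketch of the configuration that worked for me. It assumes Ollama is running locally with the model already pulled; as far as I can tell, the generic provider targets the local Ollama endpoint by default, so nothing else needs to be set.

```yaml
# fastagent.config.yaml (minimal sketch of the setup that worked for me)
# No models: section is needed; the generic. prefix alone selects the
# OpenAI-compatible "generic" provider, which appears to default to a
# local Ollama instance.
default_model: generic.llama3
```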
Steps to reproduce
Run fast-agent setup
Try to use llama3 or qwen3:8b via Ollama
Use --model=qwen3:8b or default_model: qwen3:8b — this fails
Use --model=generic.qwen3:8b or default_model: generic.qwen3:8b — this works, but isn’t documented
(I'm using qwen3:8b here, but the same applies to any running Ollama model.)
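
To make the contrast concrete, the two configurations look roughly like this (the model name is just the one I happened to test with, and the comments reflect my reading of the behavior, not confirmed internals):

```yaml
# fastagent.config.yaml

# Fails: without a provider prefix the model name is not resolved
# to the generic/Ollama provider.
# default_model: qwen3:8b

# Works: the generic. prefix routes requests to the generic
# (OpenAI-compatible) provider backed by local Ollama.
default_model: generic.qwen3:8b
```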
Suggested fix
Please update the documentation to:
Mention the generic. prefix explicitly as the provider for Ollama
Provide working examples for default_model, along with commented-out sections for different providers (see the sketch after this list)
Clarify that no models: block is required in this case
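
As a concrete suggestion, comments along these lines in the generated fastagent.config.yaml would have saved me the experimentation. This is only a sketch: the provider prefixes other than generic. are illustrative and should be checked against the model string format fast-agent actually supports.

```yaml
# Default model used by agents unless overridden with --model.
#
# Local models via Ollama use the generic. prefix; no models: block is needed:
# default_model: generic.llama3
#
# Hosted providers (illustrative prefixes, verify against the docs):
# default_model: openai.gpt-4o
# default_model: anthropic.claude-3-5-sonnet
default_model: generic.qwen3:8b
```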
Thanks for the great project!