Description
After upgrading to Copilot CLI 1.0.41, when using `--effort high` with a BYOK/custom provider, the actual request is still sent with `reasoning_effort: "high"`, but the statusline receives `model.display_name` as `gpt-5.5 (medium)`.
The same setup works correctly in Copilot CLI 1.0.40, where the statusline displays gpt-5.5 (high).
This looks like a regression in the statusline/UI display state: the actual request effort and the effort shown through the UI/statusline are out of sync.
Environment
- Copilot CLI: 1.0.41
- OS: macOS
- Provider: BYOK / custom OpenAI-compatible provider
- Wire API: responses
- Model: gpt-5.5
Relevant environment variables:
COPILOT_PROVIDER_BASE_URL=http://<custom-provider>/v1
COPILOT_PROVIDER_API_KEY=<redacted>
COPILOT_PROVIDER_WIRE_API=responses
COPILOT_PROVIDER_TYPE=openai
COPILOT_MODEL=gpt-5.5
COPILOT_PROVIDER_MODEL_ID=gpt-5.5
COPILOT_PROVIDER_WIRE_MODEL=gpt-5.5
Launch command:
Expected behavior
The statusline payload should reflect the reasoning effort that is actually being used.
For example, with --effort high, the statusline payload should contain:
{
  "model": {
    "id": "gpt-5.5",
    "display_name": "gpt-5.5 (high)"
  }
}
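To make the expectation concrete, here is a minimal sketch of a statusline consumer reading that payload. This is a hypothetical script, not the CLI's actual statusline protocol; it only assumes the payload is the JSON object shown above:

```python
import json

def read_display_name(payload_text: str) -> str:
    """Parse a statusline payload and return model.display_name."""
    payload = json.loads(payload_text)
    return payload["model"]["display_name"]

# Sample payload in the expected (1.0.40) shape:
sample = '{"model": {"id": "gpt-5.5", "display_name": "gpt-5.5 (high)"}}'
print(read_display_name(sample))  # gpt-5.5 (high)
```

With `--effort high`, a consumer like this should see the `(high)` suffix; in 1.0.41 it sees `(medium)` instead.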
Actual behavior
In 1.0.41, the statusline displays `gpt-5.5 (medium)`. That means the statusline payload display name is effectively:
{
  "model": {
    "id": "gpt-5.5",
    "display_name": "gpt-5.5 (medium)"
  }
}
However, the debug log shows that the actual request is using high:
Using custom provider: type=openai, baseUrl=http://<custom-provider>/v1, wireApi=responses
"model": "gpt-5.5"
"defaultReasoningEffort": "high"
"reasoning_effort": "high"
So the issue appears to be limited to the UI/statusline display state, not the actual request.
Comparison test
Using the same environment, same BYOK provider, same model, and same --effort high flag, only changing the Copilot CLI version:
| Version | Actual request `reasoning_effort` | Statusline `model.display_name` |
|---------|-----------------------------------|---------------------------------|
| 1.0.40  | high                              | gpt-5.5 (high)                  |
| 1.0.41  | high                              | gpt-5.5 (medium)                |
Captured statusline payload from 1.0.40:
{
  "model": {
    "id": "gpt-5.5",
    "display_name": "gpt-5.5 (high)"
  }
}
Captured request settings from the 1.0.41 debug log:
"defaultReasoningEffort": "high"
"reasoning_effort": "high"
But the 1.0.41 statusline shows `gpt-5.5 (medium)`.