feat: add MiniMax as LLM provider with auto-detection#390
octo-patch wants to merge 1 commit into `usestrix:main`.
Conversation
Add MiniMax M2.7 and M2.7-highspeed models to the STRIX_MODEL_MAP for strix/ prefix shortcuts, auto-detect MINIMAX_API_KEY and set the MiniMax API base URL (https://api.minimax.io/v1) when a MiniMax model is selected. Includes provider documentation page, overview/README updates, 17 unit tests and 3 integration tests covering model mapping, config resolution, API key auto-detection, and end-to-end streaming completion.
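Based on the description above, a hypothetical setup might look like the following (the `MINIMAX_API_KEY` variable, `STRIX_LLM` variable, and `strix/minimax-m2.7` shortcut are taken from this PR; the exact highspeed shortcut spelling is assumed):

```shell
# Hypothetical configuration sketched from the PR description: select a MiniMax
# model via the strix/ shortcut and let MINIMAX_API_KEY auto-detection supply
# the key and base URL.
export MINIMAX_API_KEY="your-minimax-key"   # picked up automatically
export STRIX_LLM="strix/minimax-m2.7"       # or "strix/minimax-m2.7-highspeed" (assumed spelling)
```

With direct provider models (e.g. `openai/MiniMax-M2.7`), the same key is auto-detected and the API base defaults to `https://api.minimax.io/v1`.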
Greptile Summary: This PR adds MiniMax as a first-class LLM provider by registering two models in `STRIX_MODEL_MAP`. Two issues need attention before merge:
Confidence Score: 3/5
Important Files Changed
Prompt To Fix All With AI

This is a comment left during a code review.
Path: tests/llm/test_minimax_integration.py
Line: 60-76
Comment:
**`STRIX_LLM` not restored in test teardown**
`os.environ["STRIX_LLM"]` is set on line 63 but is never restored in the `finally` block. If this test runs in an environment where `MINIMAX_API_KEY` is present, `STRIX_LLM` will remain as `"openai/MiniMax-M2.7"` for any subsequently executed test, potentially causing failures in other integration tests that rely on a different `STRIX_LLM` value.
The other two integration tests in this file use a `monkeypatch` fixture (which auto-restores on teardown), but this function bypasses it and does not capture or restore the original value of `STRIX_LLM`.
How can I resolve this? If you propose a fix, please make it concise.
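One minimal fix, sketched below: capture `STRIX_LLM` before mutating it and restore it in the `finally` block (the function body is a placeholder; in the real test the `try` block would exercise `LLMConfig()` as before, and pytest's `monkeypatch` fixture would achieve the same auto-restore):

```python
import os

def test_minimax_config_auto_detection():
    """Sketch: save and restore STRIX_LLM so later tests see the original value."""
    orig_strix_llm = os.environ.get("STRIX_LLM")  # may be None if unset
    os.environ["STRIX_LLM"] = "openai/MiniMax-M2.7"
    try:
        pass  # ... exercise LLMConfig() and its assertions, as in the original test ...
    finally:
        # Restore the pre-test value, or remove the key if it was unset before.
        if orig_strix_llm is None:
            os.environ.pop("STRIX_LLM", None)
        else:
            os.environ["STRIX_LLM"] = orig_strix_llm
```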
---
This is a comment left during a code review.
Path: strix/config/config.py
Line: 215-220
Comment:
**MiniMax key may be injected into Strix-gateway calls**
When `STRIX_LLM="strix/minimax-m2.7"` is configured, the earlier block sets `api_base = STRIX_API_BASE` (the Strix gateway). The new MiniMax detection block then runs unconditionally: the `api_base` override is correctly skipped (since it's already set), but if no `LLM_API_KEY` is present, `MINIMAX_API_KEY` is picked up and used as the `api_key` for a call that goes to the Strix gateway.
Unless the Strix gateway is designed to receive and forward a raw provider key, this will cause authentication failures for `strix/minimax-*` users who only have `MINIMAX_API_KEY` set. Consider adding a `not model.startswith("strix/")` guard before the MiniMax detection block so it only fires for direct provider calls.
How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "feat: add MiniMax as LLM provider with a..."
```python
@pytest.mark.asyncio()
async def test_minimax_config_auto_detection():
    """Test that MINIMAX_API_KEY auto-detection works end-to-end."""
    api_key = os.environ.get("MINIMAX_API_KEY", "")
    orig_llm_key = os.environ.pop("LLM_API_KEY", None)
    orig_llm_base = os.environ.pop("LLM_API_BASE", None)
    os.environ["STRIX_LLM"] = "openai/MiniMax-M2.7"

    try:
        config = LLMConfig()
        assert config.api_key == api_key
        assert config.api_base == "https://api.minimax.io/v1"
    finally:
        if orig_llm_key is not None:
            os.environ["LLM_API_KEY"] = orig_llm_key
```
```python
# Auto-detect MiniMax provider: use MINIMAX_API_KEY and set base URL
if _is_minimax_model(model):
    if not api_key:
        api_key = os.getenv("MINIMAX_API_KEY")
    if not api_base:
        api_base = "https://api.minimax.io/v1"
```
Summary
Details
MiniMax provides powerful large language models (M2.7, M2.7-highspeed) with up to 1M token context windows through an OpenAI-compatible API. This PR integrates MiniMax as a first-class LLM provider in Strix.
Usage:
Changes:
Test plan