
feat: add MiniMax as LLM provider with auto-detection#390

Open
octo-patch wants to merge 1 commit into usestrix:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMax M2.7 and M2.7-highspeed models to STRIX_MODEL_MAP for strix/ prefix shortcuts (e.g. strix/minimax-m2.7)
  • Auto-detect MINIMAX_API_KEY environment variable and set the MiniMax API base URL (https://api.minimax.io/v1) when a MiniMax model is selected
  • Add MiniMax provider documentation page (docs/llm-providers/minimax.mdx) with setup instructions and available models
  • Update provider overview and README with MiniMax as a recommended model

Details

MiniMax provides powerful large language models (M2.7, M2.7-highspeed) with up to 1M token context windows through an OpenAI-compatible API. This PR integrates MiniMax as a first-class LLM provider in Strix.
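Because the endpoint is OpenAI-compatible, a request to it is an ordinary chat-completions payload. The sketch below is illustrative only: the model name follows this PR's examples, and the message content is made up.

```python
import json

# Sketch only: MiniMax exposes an OpenAI-compatible API, so a completion
# request is a standard chat-completions payload. Model name per the PR's
# examples; message content is illustrative.
payload = {
    "model": "MiniMax-M2.7",
    "messages": [{"role": "user", "content": "Summarize this pull request."}],
    "stream": False,
}

# POSTing this body to https://api.minimax.io/v1/chat/completions with an
# Authorization: Bearer <MINIMAX_API_KEY> header would return a standard
# chat-completions response.
body = json.dumps(payload)
```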

Usage:

# Direct configuration
export STRIX_LLM="openai/MiniMax-M2.7"
export LLM_API_KEY="your-minimax-api-key"
export LLM_API_BASE="https://api.minimax.io/v1"

# Or with auto-detection (API base URL auto-set)
export STRIX_LLM="openai/MiniMax-M2.7"
export MINIMAX_API_KEY="your-minimax-api-key"
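The precedence between the two setups above can be sketched as follows. This is illustrative, not the actual implementation: the real logic lives in strix/config/config.py, and the function name here is hypothetical.

```python
import os

# Illustrative sketch of the precedence described above: an explicit
# LLM_API_KEY / LLM_API_BASE always wins, and MINIMAX_API_KEY plus the
# default MiniMax base URL fill in whatever is missing. The function name
# is hypothetical; the real logic lives in strix/config/config.py.
MINIMAX_API_BASE = "https://api.minimax.io/v1"

def resolve_minimax_credentials() -> tuple:
    api_key = os.getenv("LLM_API_KEY") or os.getenv("MINIMAX_API_KEY")
    api_base = os.getenv("LLM_API_BASE") or MINIMAX_API_BASE
    return api_key, api_base
```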

Changes:

  • strix/llm/utils.py: Add MiniMax models to STRIX_MODEL_MAP
  • strix/config/config.py: Auto-detect MINIMAX_API_KEY and set API base URL for MiniMax models
  • docs/llm-providers/minimax.mdx: New provider documentation page
  • docs/llm-providers/overview.mdx: Add MiniMax to provider overview table and card grid
  • README.md: Add MiniMax to recommended models list
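The utils.py change can be pictured roughly as follows. This is a sketch, not the diff itself: the mapped strings and the helper function are assumptions for illustration.

```python
# Hypothetical sketch of the strix/llm/utils.py change: STRIX_MODEL_MAP maps
# strix/ shorthand names to full model identifiers. The exact mapped strings
# and the resolve_model helper are assumptions, not taken from the diff.
STRIX_MODEL_MAP = {
    "minimax-m2.7": "openai/MiniMax-M2.7",
    "minimax-m2.7-highspeed": "openai/MiniMax-M2.7-highspeed",
}

def resolve_model(name: str) -> str:
    """Expand a strix/<shorthand> name; leave other names unchanged."""
    if name.startswith("strix/"):
        return STRIX_MODEL_MAP.get(name.removeprefix("strix/"), name)
    return name
```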

Test plan

  • 17 unit tests covering model mapping, model resolution, MiniMax detection, config auto-detection, and LLMConfig initialization
  • 3 integration tests verifying end-to-end completion, streaming, and auto-detection with real MiniMax API
  • All 20 tests pass

Add MiniMax M2.7 and M2.7-highspeed models to the STRIX_MODEL_MAP for
strix/ prefix shortcuts; auto-detect MINIMAX_API_KEY and set the MiniMax
API base URL (https://api.minimax.io/v1) when a MiniMax model is selected.

Includes provider documentation page, overview/README updates, 17 unit
tests and 3 integration tests covering model mapping, config resolution,
API key auto-detection, and end-to-end streaming completion.
@greptile-apps
Contributor

greptile-apps bot commented Mar 23, 2026

Greptile Summary

This PR adds MiniMax as a first-class LLM provider by registering two models in STRIX_MODEL_MAP, adding MINIMAX_API_KEY auto-detection in resolve_llm_config(), and shipping documentation and README updates. The integration follows the same pattern used for other third-party providers and is well-covered by unit tests.

Two issues need attention before merge:

  • strix/minimax-* + MINIMAX_API_KEY interaction (strix/config/config.py): The MiniMax auto-detection block fires unconditionally, including for strix/minimax-* models. For those models api_base is already set to the Strix gateway (STRIX_API_BASE), so the base-URL override is skipped correctly — but the api_key guard will still inject MINIMAX_API_KEY as the credential for the Strix-gateway call. Unless the gateway explicitly expects a raw MiniMax provider key, this produces authentication failures for any user who has MINIMAX_API_KEY set but no LLM_API_KEY.
  • Environment leak in integration test (tests/llm/test_minimax_integration.py): test_minimax_config_auto_detection sets STRIX_LLM via os.environ but does not restore its original value in the finally block, which can silently pollute the environment for tests that run afterward in the same process.

Confidence Score: 3/5

  • Two logic bugs need to be resolved before merging: an environment leak in the integration tests and a potential auth regression for strix/minimax-* users.
  • The model-map addition and docs are clean. The auto-detection logic has a guard (if not api_key) that prevents overwriting an explicit key, which is good. However, the unconditional execution of the MiniMax block for strix/ prefix models risks injecting a provider-specific key into gateway calls, which is a plausible production auth failure. The test teardown omission can cause flaky integration runs in any environment where MINIMAX_API_KEY is present.
  • strix/config/config.py and tests/llm/test_minimax_integration.py

Important Files Changed

  • strix/config/config.py: Adds MiniMax auto-detection to resolve_llm_config(); the detection block runs for strix/minimax-* models too, which causes the raw MINIMAX_API_KEY to be used as the credential for Strix-gateway calls.
  • strix/llm/utils.py: Adds minimax-m2.7 and minimax-m2.7-highspeed entries to STRIX_MODEL_MAP; clean, consistent with existing entries.
  • tests/llm/test_minimax_integration.py: test_minimax_config_auto_detection sets STRIX_LLM via raw os.environ but never restores it in the finally block, leaking state to subsequent tests in the same process.
  • tests/llm/test_minimax.py: 17 unit tests covering model map entries, strix/ shorthand resolution, _is_minimax_model detection, resolve_llm_config auto-detection, and LLMConfig initialization; all use monkeypatch correctly.
  • docs/llm-providers/minimax.mdx: New provider doc page; accurate, covers both manual and auto-detection setup paths and available models.

Comment on lines +60 to +76


@pytest.mark.asyncio()
async def test_minimax_config_auto_detection():
    """Test that MINIMAX_API_KEY auto-detection works end-to-end."""
    api_key = os.environ.get("MINIMAX_API_KEY", "")
    orig_llm_key = os.environ.pop("LLM_API_KEY", None)
    orig_llm_base = os.environ.pop("LLM_API_BASE", None)
    os.environ["STRIX_LLM"] = "openai/MiniMax-M2.7"

    try:
        config = LLMConfig()
        assert config.api_key == api_key
        assert config.api_base == "https://api.minimax.io/v1"
    finally:
        if orig_llm_key is not None:
            os.environ["LLM_API_KEY"] = orig_llm_key

P2 STRIX_LLM not restored in test teardown

os.environ["STRIX_LLM"] is set on line 63 but is never restored in the finally block. If this test runs in an environment where MINIMAX_API_KEY is present, STRIX_LLM will remain as "openai/MiniMax-M2.7" for any subsequently executed test, potentially causing failures in other integration tests that rely on a different STRIX_LLM value.

The other two integration tests in this file use a monkeypatch fixture (which auto-restores on teardown), but this function bypasses it and does not capture or restore the original value of STRIX_LLM.
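One concise fix, sketched here with a hypothetical helper: capture STRIX_LLM before overwriting it and restore (or delete) it in the finally block, mirroring how the test already handles LLM_API_KEY and LLM_API_BASE.

```python
import os
from typing import Callable

# Sketch of the suggested teardown fix: save STRIX_LLM before overwriting it
# and restore (or delete) it in the finally block, mirroring the existing
# handling of LLM_API_KEY / LLM_API_BASE. The helper name is hypothetical.
def run_with_strix_llm(value: str, body: Callable[[], None]) -> None:
    orig = os.environ.get("STRIX_LLM")
    os.environ["STRIX_LLM"] = value
    try:
        body()
    finally:
        if orig is not None:
            os.environ["STRIX_LLM"] = orig
        else:
            os.environ.pop("STRIX_LLM", None)
```

Alternatively, using the monkeypatch fixture the file's other tests already use (monkeypatch.setenv("STRIX_LLM", ...)) gives the same restore-on-teardown behavior with less code.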


Comment on lines +215 to +220
# Auto-detect MiniMax provider: use MINIMAX_API_KEY and set base URL
if _is_minimax_model(model):
    if not api_key:
        api_key = os.getenv("MINIMAX_API_KEY")
    if not api_base:
        api_base = "https://api.minimax.io/v1"

P2 MiniMax key may be injected into Strix-gateway calls

When STRIX_LLM="strix/minimax-m2.7" is configured, the earlier block sets api_base = STRIX_API_BASE (the Strix gateway). The new MiniMax detection block then runs unconditionally: the api_base override is correctly skipped (since it's already set), but if no LLM_API_KEY is present, MINIMAX_API_KEY is picked up and used as the api_key for a call that goes to the Strix gateway.

Unless the Strix gateway is designed to receive and forward a raw provider key, this will cause authentication failures for strix/minimax-* users who only have MINIMAX_API_KEY set. Consider adding a not model.startswith("strix/") guard before the MiniMax detection block so it only fires for direct provider calls.
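The suggested guard can be sketched as below. This is a simplified stand-in for the real strix/config/config.py code: _is_minimax_model is reduced to a substring check and the function signature is invented for illustration.

```python
# Sketch of the suggested guard: skip MiniMax auto-detection for models routed
# through the Strix gateway (strix/ prefix). _is_minimax_model and the
# function signature are simplified stand-ins for the real config code.
MINIMAX_API_BASE = "https://api.minimax.io/v1"

def _is_minimax_model(model: str) -> bool:
    return "minimax" in model.lower()

def apply_minimax_autodetect(model: str, api_key, api_base, env: dict):
    # Only fire for direct provider calls, never for strix/ gateway models.
    if _is_minimax_model(model) and not model.startswith("strix/"):
        if not api_key:
            api_key = env.get("MINIMAX_API_KEY")
        if not api_base:
            api_base = MINIMAX_API_BASE
    return api_key, api_base
```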

