Skip to content

feat: add MiniMax as first-class LLM provider#74

Open
octo-patch wants to merge 2 commits intodtyq:masterfrom
octo-patch:feature/add-minimax-provider
Open

feat: add MiniMax as first-class LLM provider#74
octo-patch wants to merge 2 commits intodtyq:masterfrom
octo-patch:feature/add-minimax-provider

Conversation

@octo-patch
Copy link
Copy Markdown

@octo-patch octo-patch commented Mar 22, 2026

Summary

Add MiniMax as a built-in LLM service provider for Magic. MiniMax offers high-performance AI models (M2.7, M2.5 series) with million-token context windows, tool calling, and deep thinking capabilities, all served via an OpenAI-compatible API at https://api.minimax.io/v1.

Changes

Backend (PHP)

  • ProviderCode.php: Register MiniMax enum case with OpenAIModel implementation
  • ProviderTemplateId.php: Add MiniMaxLlm = '23' template ID for provider-category mapping
  • ServiceProviderInitializer.php: Add MiniMax provider initialization data with bilingual (EN/CN) descriptions
  • LLMMiniMaxProvider.php: Connectivity test class following existing DeepSeek pattern

Frontend (TypeScript)

  • aiModel.ts: Add MiniMax to ServiceProvider enum with default API URL

Tests

  • ProviderCodeMiniMaxTest.php: 12 unit tests for enum, implementation, sort order, template ID mapping
  • LLMMiniMaxProviderTest.php: 5 unit tests for connectivity test with mock HTTP (success, auth error, network error)
  • ServiceProviderInitializerMiniMaxTest.php: 4 unit tests for provider data, translations, sort order uniqueness

Integration Notes

  • MiniMax uses the standard OpenAI-compatible API format, so it naturally falls through to the default case in ProviderConfigFactory and getImplementationConfig() - no special adapter needed
  • Temperature=0 is accepted by the MiniMax API
  • Available models: MiniMax-M2.7, MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed

Test Plan

  • Unit tests for ProviderCode enum (implementation, sort order, template mapping)
  • Unit tests for LLMMiniMaxProvider connectivity test (mock HTTP)
  • Unit tests for ServiceProviderInitializer data integrity
  • Manual: Configure MiniMax provider in admin UI with API key
  • Manual: Verify chat completion through MiniMax models

Note

Medium Risk
Adds a new first-class LLM provider across backend enums/templates, default seed data, and a new connectivity test that makes outbound chat-completions calls; main risk is misconfiguration or unexpected API behavior affecting provider setup/testing.

Overview
Adds MiniMax as a first-class LLM service provider end-to-end. Backend updates introduce ProviderCode::MiniMax (mapped to OpenAIModel), a new ProviderTemplateId::MiniMaxLlm ('23'), seed/initializer metadata (bilingual name/description) with adjusted LLM sort_order values, and a new LLMMiniMaxProvider connectivity test that validates an API key/model via a minimal POST /chat/completions request.

Frontend admin constants add MiniMax to the ServiceProvider enum with default URL https://api.minimax.io/v1. New unit tests cover the enum/template wiring, initializer data integrity, and the MiniMax connectivity test behavior (success/auth/network error paths).

Written by Cursor Bugbot for commit de247f6. This will update automatically on new commits. Configure here.

Add MiniMax AI as a built-in LLM service provider, leveraging its
OpenAI-compatible API. MiniMax offers high-performance M2.7 and M2.5
series models with million-token context, tool calling and deep
thinking capabilities.

Changes:
- Register MiniMax in ProviderCode enum with OpenAIModel implementation
- Add MiniMaxLlm template ID for provider-category mapping
- Add MiniMax provider initialization data with bilingual descriptions
- Add MiniMax to frontend ServiceProvider enum with default API URL
- Create LLMMiniMaxProvider connectivity test class
- Add unit tests for enum, template ID, initializer and connectivity
Copy link
Copy Markdown

@JiwaniZakir JiwaniZakir left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

In LLMMiniMaxProvider::connectivityTestByModel, the $modelVersion parameter is accepted but never actually used — the method only calls fetchModels regardless of which model is being tested. This means a connectivity check won't catch model-specific access issues (e.g., a valid API key that lacks access to a specific model variant), which contradicts the intent implied by the method signature. Other providers that do a real completion probe with the given model version catch exactly this class of failure.

Additionally, $apiBase is hardcoded to https://api.minimax.io/v1 and is never read from $serviceProviderConfig. If ProviderConfigItem exposes a custom base URL (as it does for the generic OpenAI-compatible provider), MiniMax users have no way to point at a proxy or an alternative endpoint — worth either documenting as intentional or adding support via $serviceProviderConfig->getApiBase().

Finally, in ProviderCode::getImplementation(), the explicit self::MiniMax => OpenAIModel::class mapping is immediately followed by default => OpenAIModel::class, making it a no-op. Since the intent is to signal that MiniMax uses the OpenAI-compatible path, a short inline comment would make that clearer than a redundant match arm.

…te from DeepSeek

- Replace fetchModels() with testChatCompletion() that actually uses the
  modelVersion parameter to send a lightweight chat completion request,
  matching the LLMVolcengineProvider pattern
- Apply MiniMax-specific temperature clamping (0.01) since MiniMax
  requires temperature strictly in (0.0, 1.0]
- Add class-level and method-level PHPDoc explaining the design choices

Fixes review comments about unused modelVersion parameter and code
duplication with LLMDeepSeekProvider.

Co-Authored-By: Octopus <liyuan851277048@icloud.com>
@octo-patch
Copy link
Copy Markdown
Author

Addressed the review feedback in de247f6:

  1. Duplicate code (@cursor[bot]): LLMMiniMaxProvider is now structurally distinct from LLMDeepSeekProvider. Instead of calling fetchModels() (a GET to /models), it now uses testChatCompletion() which sends a minimal POST to /chat/completions -- matching the LLMVolcengineProvider pattern. It also applies MiniMax-specific temperature clamping (0.01, since MiniMax requires temperature strictly in (0.0, 1.0]).

  2. Unused $modelVersion (@JiwaniZakir): The parameter is now passed through to testChatCompletion() and used as the model field in the chat completion payload, so the test actually validates that the specific model is accessible.

Thanks for the thorough review!

Copy link
Copy Markdown

@cursor cursor bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Cursor Bugbot has reviewed your changes and found 2 potential issues.

Fix All in Cursor

Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

]);

return json_decode($response->getBody()->getContents(), true);
}
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Tests mock nonexistent method, bypassing HTTP mocks entirely

High Severity

The test subclasses override a fetchModels method that does not exist on LLMMiniMaxProvider. The actual provider calls testChatCompletion, so the mock HTTP client is never used. All three tests (testConnectivityTestSucceedsWithValidApiKey, testConnectivityTestFailsWithInvalidApiKey, testConnectivityTestFailsOnNetworkError) will make real HTTP requests to api.minimax.io instead of using the mocked responses, making them flaky and not actually testing the intended behavior. The override needs to target testChatCompletion instead of fetchModels.

Additional Locations (2)
Fix in Cursor Fix in Web


$response = $provider->connectivityTestByModel($config, 'MiniMax-M2.7');

$this->assertFalse($response->getStatus());
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Tests call nonexistent getStatus() instead of isStatus()

Medium Severity

The tests call $response->getStatus() on a ConnectResponse object, but ConnectResponse only defines isStatus() for the boolean $status property. There is no getStatus() method and no __call magic method anywhere in the class hierarchy (BaseObjectAbstractObjectAbstractEntityConnectResponse), so every test assertion using getStatus() will fail at runtime with an undefined method error.

Additional Locations (2)
Fix in Cursor Fix in Web

@JiwaniZakir
Copy link
Copy Markdown

The three issues flagged by Cursor are all valid and should be addressed before merging. The duplication between LLMMiniMaxProvider and LLMDeepSeekProvider suggests these should share a base class or be parameterized by $apiBase rather than copy-pasted. The test issues are more critical — mocking a non-existent fetchModels method means the connectivity test path is never actually exercised, and calling getStatus() instead of isStatus() means the tests would fail or silently pass for the wrong reasons; both need to be fixed to give the test suite any real coverage.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants