feat: add MiniMax as first-class LLM provider #74
octo-patch wants to merge 2 commits into dtyq:master
Conversation
Add MiniMax AI as a built-in LLM service provider, leveraging its OpenAI-compatible API. MiniMax offers high-performance M2.7 and M2.5 series models with million-token context, tool calling, and deep thinking capabilities.

Changes:
- Register MiniMax in ProviderCode enum with OpenAIModel implementation
- Add MiniMaxLlm template ID for provider-category mapping
- Add MiniMax provider initialization data with bilingual descriptions
- Add MiniMax to frontend ServiceProvider enum with default API URL
- Create LLMMiniMaxProvider connectivity test class
- Add unit tests for enum, template ID, initializer and connectivity
backend/magic-service/app/Domain/Provider/Service/ConnectivityTest/LLM/LLMMiniMaxProvider.php
JiwaniZakir left a comment
In LLMMiniMaxProvider::connectivityTestByModel, the $modelVersion parameter is accepted but never actually used — the method only calls fetchModels regardless of which model is being tested. This means a connectivity check won't catch model-specific access issues (e.g., a valid API key that lacks access to a specific model variant), which contradicts the intent implied by the method signature. Other providers that do a real completion probe with the given model version catch exactly this class of failure.
Additionally, $apiBase is hardcoded to https://api.minimax.io/v1 and is never read from $serviceProviderConfig. If ProviderConfigItem exposes a custom base URL (as it does for the generic OpenAI-compatible provider), MiniMax users have no way to point at a proxy or an alternative endpoint — worth either documenting as intentional or adding support via $serviceProviderConfig->getApiBase().
Finally, in ProviderCode::getImplementation(), the explicit self::MiniMax => OpenAIModel::class mapping is immediately followed by default => OpenAIModel::class, making it a no-op. Since the intent is to signal that MiniMax uses the OpenAI-compatible path, a short inline comment would make that clearer than a redundant match arm.
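For reference, the redundant arm looks roughly like this (a sketch only: the enum's backing value and the `OpenAIModel` stub are assumptions, not the repository's exact code):

```php
<?php
// Sketch of the ProviderCode::getImplementation() shape under discussion.
// OpenAIModel is stubbed; the enum backing value is an assumption.

class OpenAIModel {}

enum ProviderCode: string
{
    case MiniMax = 'MiniMax';

    public function getImplementation(): string
    {
        return match ($this) {
            // An explicit `self::MiniMax => OpenAIModel::class` arm directly
            // above `default` would be a no-op. A comment on the default arm
            // documents the intent without dead code:
            // MiniMax uses the OpenAI-compatible implementation.
            default => OpenAIModel::class,
        };
    }
}
```

The `match` still resolves MiniMax through the default arm, so behavior is unchanged; only the dead arm disappears.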
…te from DeepSeek
- Replace fetchModels() with testChatCompletion() that actually uses the modelVersion parameter to send a lightweight chat completion request, matching the LLMVolcengineProvider pattern
- Apply MiniMax-specific temperature clamping (0.01) since MiniMax requires temperature strictly in (0.0, 1.0]
- Add class-level and method-level PHPDoc explaining the design choices

Fixes review comments about unused modelVersion parameter and code duplication with LLMDeepSeekProvider.

Co-Authored-By: Octopus <liyuan851277048@icloud.com>
Addressed the review feedback in de247f6:
Thanks for the thorough review!
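A minimal sketch of the de247f6 approach, i.e. a probe that actually uses the model under test and clamps temperature for MiniMax. The helper names and payload shape below are illustrative assumptions, not the exact repository code:

```php
<?php
// Illustrative sketch: build a minimal chat-completion probe payload that
// uses the modelVersion parameter (which the old fetchModels() path ignored).
// Function names and payload shape are assumptions.

function clampMiniMaxTemperature(float $t): float
{
    // MiniMax requires temperature strictly in (0.0, 1.0], so any
    // non-positive value is raised to 0.01 and values above 1.0 are capped.
    return min(max($t, 0.01), 1.0);
}

function buildProbePayload(string $modelVersion, float $temperature = 0.0): array
{
    return [
        'model' => $modelVersion,
        'messages' => [['role' => 'user', 'content' => 'ping']],
        'max_tokens' => 1,
        'temperature' => clampMiniMaxTemperature($temperature),
    ];
}
```

Sending this payload to `POST {apiBase}/chat/completions` fails fast when the key lacks access to the specific model, which is exactly the class of failure a models-list call cannot catch.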
Cursor Bugbot has reviewed your changes and found 2 potential issues.
```php
    ]);

    return json_decode($response->getBody()->getContents(), true);
}
```
Tests mock nonexistent method, bypassing HTTP mocks entirely
High Severity
The test subclasses override a fetchModels method that does not exist on LLMMiniMaxProvider. The actual provider calls testChatCompletion, so the mock HTTP client is never used. All three tests (testConnectivityTestSucceedsWithValidApiKey, testConnectivityTestFailsWithInvalidApiKey, testConnectivityTestFailsOnNetworkError) will make real HTTP requests to api.minimax.io instead of using the mocked responses, making them flaky and not actually testing the intended behavior. The override needs to target testChatCompletion instead of fetchModels.
Additional Locations (2)
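The fix Bugbot describes can be sketched as follows. The provider shape here is a deliberately simplified assumption; only the seam (`testChatCompletion` being the method in the real call path) mirrors the review finding:

```php
<?php
// Sketch: a test double must override the method the provider actually
// calls. Overriding a nonexistent fetchModels() changes nothing, so the
// canned response must come from testChatCompletion instead.

class LLMMiniMaxProvider
{
    protected function testChatCompletion(array $config, string $modelVersion): array
    {
        // Real implementation would POST to {apiBase}/chat/completions.
        throw new RuntimeException('unit tests must not reach api.minimax.io');
    }

    public function connectivityTestByModel(array $config, string $modelVersion): bool
    {
        return ($this->testChatCompletion($config, $modelVersion)['ok'] ?? false) === true;
    }
}

final class StubMiniMaxProvider extends LLMMiniMaxProvider
{
    protected function testChatCompletion(array $config, string $modelVersion): array
    {
        // Canned response: no HTTP, no flakiness.
        return ['ok' => ($config['api_key'] ?? '') === 'valid-key'];
    }
}
```

With the override on the right method, the success/auth-error/network-error paths can all be exercised without real requests to api.minimax.io.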
```php
$response = $provider->connectivityTestByModel($config, 'MiniMax-M2.7');

$this->assertFalse($response->getStatus());
```
Tests call nonexistent getStatus() instead of isStatus()
Medium Severity
The tests call $response->getStatus() on a ConnectResponse object, but ConnectResponse only defines isStatus() for the boolean $status property. There is no getStatus() method and no __call magic method anywhere in the class hierarchy (BaseObject → AbstractObject → AbstractEntity → ConnectResponse), so every test assertion using getStatus() will fail at runtime with an undefined method error.
Additional Locations (2)
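A minimal stub makes the mismatch concrete. The real class hierarchy is much larger; only the boolean status accessor is modeled here:

```php
<?php
// Sketch: ConnectResponse exposes isStatus(), not getStatus(), for its
// boolean $status property. Only the relevant accessor is modeled.

class ConnectResponse
{
    public function __construct(private bool $status) {}

    public function isStatus(): bool
    {
        return $this->status;
    }
}

$response = new ConnectResponse(false);

// $response->getStatus();        // Error: Call to undefined method
var_dump($response->isStatus()); // the accessor the tests should assert on
```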
The three issues flagged by Cursor are all valid and should be addressed before merging. The duplication between LLMMiniMaxProvider and LLMDeepSeekProvider noted in the earlier review is also worth resolving in the same pass.


Summary

Add MiniMax as a built-in LLM service provider for Magic. MiniMax offers high-performance AI models (M2.7, M2.5 series) with million-token context windows, tool calling, and deep thinking capabilities, all served via an OpenAI-compatible API at https://api.minimax.io/v1.

Changes

Backend (PHP)
- `ProviderCode.php`: Register `MiniMax` enum case with `OpenAIModel` implementation
- `ProviderTemplateId.php`: Add `MiniMaxLlm = '23'` template ID for provider-category mapping
- `ServiceProviderInitializer.php`: Add MiniMax provider initialization data with bilingual (EN/CN) descriptions
- `LLMMiniMaxProvider.php`: Connectivity test class following existing DeepSeek pattern

Frontend (TypeScript)
- `aiModel.ts`: Add `MiniMax` to `ServiceProvider` enum with default API URL

Tests
- `ProviderCodeMiniMaxTest.php`: 12 unit tests for enum, implementation, sort order, template ID mapping
- `LLMMiniMaxProviderTest.php`: 5 unit tests for connectivity test with mock HTTP (success, auth error, network error)
- `ServiceProviderInitializerMiniMaxTest.php`: 4 unit tests for provider data, translations, sort order uniqueness

Integration Notes
- MiniMax uses the `default` case in `ProviderConfigFactory` and `getImplementationConfig()` - no special adapter needed
- Models: `MiniMax-M2.7`, `MiniMax-M2.7-highspeed`, `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`

Test Plan
Note
Medium Risk
Adds a new first-class LLM provider across backend enums/templates, default seed data, and a new connectivity test that makes outbound chat-completions calls; main risk is misconfiguration or unexpected API behavior affecting provider setup/testing.
Overview
Adds MiniMax as a first-class LLM service provider end-to-end. Backend updates introduce `ProviderCode::MiniMax` (mapped to `OpenAIModel`), a new `ProviderTemplateId::MiniMaxLlm` (`'23'`), seed/initializer metadata (bilingual name/description) with adjusted LLM `sort_order` values, and a new `LLMMiniMaxProvider` connectivity test that validates an API key/model via a minimal `POST /chat/completions` request.

Frontend admin constants add `MiniMax` to the `ServiceProvider` enum with default URL `https://api.minimax.io/v1`. New unit tests cover the enum/template wiring, initializer data integrity, and the MiniMax connectivity test behavior (success/auth/network error paths).

Written by Cursor Bugbot for commit de247f6. This will update automatically on new commits.