Add MiniMax as a first-class LLM provider #690
octo-patch wants to merge 2 commits into google:main
Conversation
Add support for MiniMax models (M2.7 and M2.7-highspeed) via their OpenAI-compatible Chat Completion API. MiniMax is a leading AI company offering high-performance language models with up to a 1M-token context window.

Changes:
- Add langfun/core/llms/minimax.py with a MiniMax provider class extending OpenAIChatCompletionAPI, including model info, temperature clamping (MiniMax requires temperature > 0), and API key management
- Add langfun/core/llms/minimax_test.py with 10 unit tests
- Update langfun/core/llms/__init__.py to export the MiniMax classes
- Update README.md to mention MiniMax in the supported providers list
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
JiwaniZakir
left a comment
In _register_minimax_models(), both model IDs are registered against the base MiniMax class rather than their respective typed subclasses (MiniMaxM27, MiniMaxM27Highspeed). This means lf.LanguageModel.make('MiniMax-M2.7') would return a base MiniMax instance that still requires the model parameter to be set, rather than a pre-configured subclass instance — inconsistent with how other providers like DeepSeek wire up their registrations.
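A standalone sketch of the suggested fix, with a plain dict standing in for langfun's actual registry machinery (the registry, make() helper, and class bodies here are simplified stand-ins, not the real langfun API): each model ID maps to its typed subclass so lookup yields a pre-configured instance.

```python
# Simplified stand-ins for the classes described in the PR.
class MiniMax:
    model = None  # base class: caller must still set the model


class MiniMaxM27(MiniMax):
    model = 'MiniMax-M2.7'


class MiniMaxM27Highspeed(MiniMax):
    model = 'MiniMax-M2.7-highspeed'


_REGISTRY = {}  # stand-in for langfun's model registry


def _register_minimax_models():
    # Register each model ID against its typed subclass, not the base
    # MiniMax class, so lookup returns a pre-configured instance --
    # matching how other providers (e.g. DeepSeek) wire up registration.
    _REGISTRY['MiniMax-M2.7'] = MiniMaxM27
    _REGISTRY['MiniMax-M2.7-highspeed'] = MiniMaxM27Highspeed


_register_minimax_models()


def make(model_id):
    # Stand-in for lf.LanguageModel.make(): instantiate the registered class.
    return _REGISTRY[model_id]()
```

With this wiring, make('MiniMax-M2.7') returns a MiniMaxM27 instance whose model field is already set, instead of a base MiniMax that still needs configuration.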
Additionally, the _request_args override in MiniMax clamps temperature == 0.0 to 0.01 since the API requires (0.0, 1.0], but there is no guard on the upper bound: if a caller passes temperature > 1.0, it is forwarded as-is and will likely cause a 4xx from the MiniMax API. The upper boundary should either be clamped as well or rejected with a clear ValueError at validation time.
Minor: the blank line between _register_minimax_models() definition and its module-level call is missing (line 179), which is inconsistent with PEP 8 style used elsewhere in the repo.
Summary
Add MiniMax as a first-class LLM provider in Langfun, following the existing patterns used by DeepSeek and Groq providers.
Changes
- langfun/core/llms/minimax.py — New provider module:
  - MiniMaxModelInfo extending lf.ModelInfo with provider metadata and links
  - MiniMax base class extending OpenAIChatCompletionAPI with MiniMax-specific configuration
  - MiniMaxM27 and MiniMaxM27Highspeed convenience classes for the MiniMax-M2.7 and MiniMax-M2.7-highspeed models
  - API key via the api_key parameter or the MINIMAX_API_KEY environment variable
  - lf.LanguageModel.register() for LanguageModel.get() support
- langfun/core/llms/minimax_test.py — 10 unit tests, including LanguageModel.get() registration
- langfun/core/llms/__init__.py — Added MiniMax exports
- README.md — Added MiniMax to the list of supported LLMs

Supported Models
- MiniMax-M2.7
- MiniMax-M2.7-highspeed

Usage
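A minimal usage sketch, assuming the registration and MINIMAX_API_KEY handling described in the summary above (instantiation only; the commented-out call is what would issue a real API request):

```python
import os

# The provider reads the key from MINIMAX_API_KEY if api_key is not passed.
os.environ.setdefault('MINIMAX_API_KEY', '<your-api-key>')

import langfun as lf

# Resolve a pre-configured model instance via the registration in this PR.
lm = lf.LanguageModel.get('MiniMax-M2.7')

# response = lm('Hello, MiniMax!')  # would send a Chat Completion request
```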
Test Results
All 10 unit tests pass. Existing provider tests (DeepSeek, Groq) continue to pass without regressions.
4 files changed, 288 additions, 1 deletion