feat: add MiniMax as first-class LLM provider (M2.7 default)#488
octo-patch wants to merge 2 commits into AsyncFuncAI:main
Conversation
Add MiniMax (MiniMax-M2.5, MiniMax-M2.5-highspeed) as a supported LLM provider via the OpenAI-compatible API endpoint.

Changes:
- api/minimax_client.py: MiniMaxClient extending OpenAIClient with temperature clamping to (0, 1] and response_format removal (sketched after this list)
- api/config.py: register MiniMaxClient in CLIENT_CLASSES and the provider map
- api/config/generator.json: add the minimax provider with model definitions
- README.md: document the MiniMax provider and the MINIMAX_API_KEY env var
- tests/unit/test_minimax_client.py: 20 unit tests covering init, temperature clamping, response_format, messages, and config integration
- tests/integration/test_minimax_integration.py: 3 integration tests with real API calls (skipped without MINIMAX_API_KEY)
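For reviewers skimming the diff, a minimal sketch of the client override described above might look like the following. This is illustrative only: the import path and the kwargs-adjustment hook are assumptions about how OpenAIClient is structured, not the PR's actual code.

```python
# Hypothetical sketch, not the PR's actual code: the import path and the
# kwargs-adjustment hook are assumptions about OpenAIClient's structure.
from api.openai_client import OpenAIClient  # import path assumed


class MiniMaxClient(OpenAIClient):
    """OpenAI-compatible client with MiniMax-specific parameter handling."""

    def _adjust_generation_kwargs(self, **kwargs):
        # MiniMax accepts temperature only in (0, 1]; clamp values outside it.
        temp = kwargs.get("temperature")
        if temp is not None:
            if temp <= 0:
                kwargs["temperature"] = 0.01
            elif temp > 1:
                kwargs["temperature"] = 1.0
        # The MiniMax endpoint does not support response_format; drop it.
        kwargs.pop("response_format", None)
        return kwargs
```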
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request expands the platform's capabilities by integrating MiniMax as a new first-class Large Language Model provider. The integration allows users to leverage MiniMax's M2.5 and M2.5-highspeed models, known for their large context windows, within the existing framework. The changes handle MiniMax-specific API requirements, such as temperature constraints and unsupported parameters, and include thorough testing.
Code Review
This pull request successfully adds MiniMax as a first-class LLM provider. The implementation is well-structured, extending the existing OpenAI client and including necessary configuration updates and comprehensive unit and integration tests. I've identified one critical issue in an integration test involving a hardcoded path that must be fixed. Additionally, I have a couple of medium-severity suggestions to improve code maintainability by removing magic numbers and simplifying test logic. Overall, great work on expanding the provider support.
```python
env_path2 = os.path.expanduser("/home/ximi/github_pr/.env.local")
if os.path.exists(env_path2):
    load_dotenv(env_path2)
```
This test file contains a hardcoded absolute path, `/home/ximi/github_pr/.env.local`, which will cause the tests to fail on any other machine or in a CI environment. The path should be removed; loading environment variables should rely on standard mechanisms such as a `.env` file in the project root or variables set in the execution environment.
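A portable setup might look like the sketch below. It leans on python-dotenv's `find_dotenv`, which searches upward from the current directory, so nothing machine-specific is required; the `.env.local` filename is carried over from the original snippet.

```python
import os

from dotenv import find_dotenv, load_dotenv

# Look for .env.local starting from the working directory and walking up;
# if none is found, fall back to variables already set in the environment
# (e.g. CI secrets). find_dotenv returns "" when nothing matches, in which
# case load_dotenv simply loads nothing.
load_dotenv(find_dotenv(".env.local", usecwd=True))

MINIMAX_API_KEY = os.getenv("MINIMAX_API_KEY")  # integration tests skip if unset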
```python
temp = final_kwargs.get("temperature")
if temp is not None:
    if temp <= 0:
        final_kwargs["temperature"] = 0.01
```
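The magic-number suggestion mentioned in the review summary presumably targets the `0.01` literal above; a named-constant version might look like this (the constant names are illustrative, not from the PR):

```python
# Named constants make the MiniMax temperature bounds self-documenting
# and reusable from tests. Names here are illustrative.
MINIMAX_MIN_TEMPERATURE = 0.01
MINIMAX_MAX_TEMPERATURE = 1.0

temp = final_kwargs.get("temperature")
if temp is not None:
    # Clamp into MiniMax's supported (0, 1] range.
    final_kwargs["temperature"] = min(
        max(temp, MINIMAX_MIN_TEMPERATURE), MINIMAX_MAX_TEMPERATURE
    )
```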
```python
with patch.dict(os.environ, {}, clear=True):
    # Remove all potential API key sources
    env_clean = {k: v for k, v in os.environ.items() if "MINIMAX" not in k}
    with patch.dict(os.environ, env_clean, clear=True):
        with pytest.raises(ValueError, match="MINIMAX_API_KEY"):
            MiniMaxClient()
```
The nested `patch.dict` context managers are redundant and make the test harder to read. The outer `with patch.dict(os.environ, {}, clear=True):` already clears the environment variables, so the inner logic is a no-op. You can simplify this test to a single context manager.
Suggested change:

```diff
 with patch.dict(os.environ, {}, clear=True):
-    # Remove all potential API key sources
-    env_clean = {k: v for k, v in os.environ.items() if "MINIMAX" not in k}
-    with patch.dict(os.environ, env_clean, clear=True):
-        with pytest.raises(ValueError, match="MINIMAX_API_KEY"):
-            MiniMaxClient()
+    with pytest.raises(ValueError, match="MINIMAX_API_KEY"):
+        MiniMaxClient()
```
Add MiniMax-M2.7 as the new default model while keeping M2.5 and M2.5-highspeed as alternatives. Update config, tests, and docs.
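For context, the generator.json side of this change plausibly looks something like the following; the key names are guesses based on the provider pattern described in the first commit, not the file's actual schema:

```json
{
  "minimax": {
    "default_model": "MiniMax-M2.7",
    "models": ["MiniMax-M2.7", "MiniMax-M2.5", "MiniMax-M2.5-highspeed"]
  }
}
```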
Summary
Files Changed
Test Plan