feat: add MiniMax as first-class LLM provider (M2.7 default)#488

Open
octo-patch wants to merge 2 commits into AsyncFuncAI:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 17, 2026

Summary

  • Add MiniMax as a first-class LLM provider via MiniMaxClient extending OpenAIClient
  • Default model: MiniMax-M2.7 (latest flagship with enhanced reasoning)
  • Also supports MiniMax-M2.5 and MiniMax-M2.5-highspeed (204K context)
  • Temperature clamping to [0.01, 1.0] and removal of response_format for API compatibility
  • Full test coverage: 20 unit tests + 4 integration tests
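The compatibility tweaks above can be sketched as a small kwargs-sanitizing step. This is an illustrative standalone version, not the PR's actual code; in the PR the equivalent logic lives inside MiniMaxClient.convert_inputs_to_api_kwargs:

```python
# Hedged sketch of the MiniMax compatibility tweaks described above.
# The function name is illustrative; the real logic is part of
# MiniMaxClient.convert_inputs_to_api_kwargs.

def sanitize_minimax_kwargs(api_kwargs: dict) -> dict:
    """Clamp temperature into (0, 1] and drop unsupported keys."""
    kwargs = dict(api_kwargs)  # avoid mutating the caller's dict
    temp = kwargs.get("temperature")
    if temp is not None:
        if temp <= 0:
            kwargs["temperature"] = 0.01  # MiniMax rejects temperature == 0
        elif temp > 1.0:
            kwargs["temperature"] = 1.0
    # MiniMax's OpenAI-compatible endpoint does not accept response_format
    kwargs.pop("response_format", None)
    return kwargs
```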

Files Changed

  • api/minimax_client.py - MiniMaxClient with OpenAI-compat defaults
  • api/config.py - Provider registration
  • api/config/generator.json - Model configuration (M2.7 default)
  • README.md - Provider documentation
  • tests/unit/test_minimax_client.py - 20 unit tests
  • tests/integration/test_minimax_integration.py - 4 integration tests

Test Plan

  • Unit tests pass (20/20)
  • Integration tests pass with real MiniMax API (4/4)
  • M2.7, M2.5, M2.5-highspeed all verified working
  • Temperature clamping verified

Add MiniMax (MiniMax-M2.5, MiniMax-M2.5-highspeed) as a supported LLM
provider via the OpenAI-compatible API endpoint.

Changes:
- api/minimax_client.py: MiniMaxClient extending OpenAIClient with
  temperature clamping (0,1] and response_format removal
- api/config.py: Register MiniMaxClient in CLIENT_CLASSES and provider map
- api/config/generator.json: Add minimax provider with model definitions
- README.md: Document MiniMax provider and MINIMAX_API_KEY env var
- tests/unit/test_minimax_client.py: 20 unit tests covering init,
  temperature clamping, response_format, messages, and config integration
- tests/integration/test_minimax_integration.py: 3 integration tests
  with real API calls (skipped without MINIMAX_API_KEY)
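Based on the change list above, the new provider entry in api/config/generator.json might look roughly like this (field names are assumptions inferred from the description; the exact schema follows the repo's existing provider entries):

```json
{
  "minimax": {
    "client_class": "MiniMaxClient",
    "default_model": "MiniMax-M2.5",
    "models": {
      "MiniMax-M2.5": { "temperature": 1.0 },
      "MiniMax-M2.5-highspeed": { "temperature": 1.0 }
    }
  }
}
```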
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the platform's capabilities by integrating MiniMax as a new first-class Large Language Model provider. This integration allows users to leverage MiniMax's M2.5 and M2.5-highspeed models, known for their large context windows, within the existing framework. The changes ensure seamless compatibility by handling MiniMax-specific API requirements, such as temperature constraints and unsupported parameters, and include thorough testing to guarantee reliability.

Highlights

  • New LLM Provider Integration: Added MiniMax as a first-class LLM provider, leveraging its OpenAI-compatible API endpoint.
  • Model Support: Enabled support for MiniMax-M2.5 and MiniMax-M2.5-highspeed models, both offering a 204K context window.
  • API Parameter Handling: Implemented automatic temperature clamping to the (0.0, 1.0] range, as MiniMax rejects a temperature of 0. Also, ensured automatic removal of the unsupported response_format parameter.
  • Core Logic: Introduced api/minimax_client.py to encapsulate MiniMax-specific client logic, extending the existing OpenAIClient.
  • Configuration Updates: Updated api/config.py to register the new MiniMaxClient and api/config/generator.json to define MiniMax provider settings and model parameters.
  • Documentation: Updated README.md to include instructions for setting up the MiniMax provider and its MINIMAX_API_KEY environment variable.
  • Testing: Added comprehensive unit tests (tests/unit/test_minimax_client.py) and integration tests (tests/integration/test_minimax_integration.py) to validate the MiniMax client's functionality and integration.


Changelog
  • README.md
    • Added MiniMax to the list of supported LLM providers.
    • Documented the MINIMAX_API_KEY environment variable.
    • Updated the generator.json description to include MiniMax.
    • Added MINIMAX_API_KEY to the environment variable table and Docker run command example.
  • api/config.py
    • Imported MiniMaxClient.
    • Added MINIMAX_API_KEY to environment variable loading.
    • Registered MiniMaxClient in CLIENT_CLASSES.
    • Included "minimax" in the default_map for provider fallback.
  • api/config/generator.json
    • Added a new provider configuration for "minimax", specifying MiniMaxClient as the client class, MiniMax-M2.5 as the default model, and defining MiniMax-M2.5 and MiniMax-M2.5-highspeed models with a default temperature of 1.0.
  • api/minimax_client.py
    • Created a new file defining MiniMaxClient, which extends OpenAIClient.
    • Implemented convert_inputs_to_api_kwargs to handle MiniMax-specific constraints: clamping temperature to (0.0, 1.0] and removing the response_format parameter.
    • Set default base URL and environment variable names for MiniMax.
  • tests/integration/test_minimax_integration.py
    • Created a new file containing integration tests for MiniMaxClient.
    • Included tests for basic chat completion with MiniMax-M2.5, usage of the MiniMax-M2.5-highspeed model, and verification of temperature clamping.
    • Tests are skipped if MINIMAX_API_KEY is not set.
  • tests/unit/test_minimax_client.py
    • Created a new file containing unit tests for MiniMaxClient.
    • Covered initialization, temperature clamping logic (zero, negative, above max, valid, and no temperature), response_format removal, message conversion, and integration with the api.config system.
Activity
  • Unit tests passed (20 tests).
  • Integration tests passed with a real MiniMax API key (3 tests).
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.


Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request successfully adds MiniMax as a first-class LLM provider. The implementation is well-structured, extending the existing OpenAI client and including necessary configuration updates and comprehensive unit and integration tests. I've identified one critical issue in an integration test involving a hardcoded path that must be fixed. Additionally, I have a couple of medium-severity suggestions to improve code maintainability by removing magic numbers and simplifying test logic. Overall, great work on expanding the provider support.

Comment on lines +15 to +17
env_path2 = os.path.expanduser("/home/ximi/github_pr/.env.local")
if os.path.exists(env_path2):
    load_dotenv(env_path2)

critical

This test file contains a hardcoded absolute path /home/ximi/github_pr/.env.local. This will cause the tests to fail on any other machine or in a CI environment. This path should be removed. Loading environment variables should rely on standard mechanisms like a .env file in the project root or variables set in the execution environment.
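One portable alternative to the hardcoded path, as a sketch: walk upward from the test file toward the filesystem root and use the first .env file found (the result would then be handed to load_dotenv; function name and layout are illustrative, not the repo's actual code):

```python
# Hedged sketch: locate the nearest .env file relative to a starting
# directory instead of hardcoding an absolute path.
from pathlib import Path
from typing import Optional

def find_env_file(start: Path) -> Optional[Path]:
    """Return the nearest .env file at or above `start`, or None."""
    for directory in [start, *start.parents]:
        candidate = directory / ".env"
        if candidate.is_file():
            return candidate
    return None
```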

temp = final_kwargs.get("temperature")
if temp is not None:
    if temp <= 0:
        final_kwargs["temperature"] = 0.01

medium

The value 0.01 is a magic number. It would be better to define it as a module-level constant, for example MINIMAX_MIN_TEMPERATURE = 0.01, to improve readability and maintainability. The same applies to the maximum temperature 1.0 on line 86.
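The reviewer's suggestion could look roughly like this (constant names are illustrative, not from the PR):

```python
# Module-level constants in place of the inline magic numbers
# (names are assumptions; the PR uses 0.01 and 1.0 inline).
MINIMAX_MIN_TEMPERATURE = 0.01  # MiniMax rejects temperature == 0
MINIMAX_MAX_TEMPERATURE = 1.0

def clamp_temperature(temp: float) -> float:
    """Clamp a requested sampling temperature into MiniMax's accepted range."""
    return min(max(temp, MINIMAX_MIN_TEMPERATURE), MINIMAX_MAX_TEMPERATURE)
```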

Comment on lines +46 to +51
with patch.dict(os.environ, {}, clear=True):
    # Remove all potential API key sources
    env_clean = {k: v for k, v in os.environ.items() if "MINIMAX" not in k}
    with patch.dict(os.environ, env_clean, clear=True):
        with pytest.raises(ValueError, match="MINIMAX_API_KEY"):
            MiniMaxClient()

medium

The nested patch.dict context managers are redundant and make the test harder to read. The outer with patch.dict(os.environ, {}, clear=True): already clears the environment variables, so the inner logic is not needed. You can simplify this test to use a single context manager.

Suggested change

Before:
with patch.dict(os.environ, {}, clear=True):
    # Remove all potential API key sources
    env_clean = {k: v for k, v in os.environ.items() if "MINIMAX" not in k}
    with patch.dict(os.environ, env_clean, clear=True):
        with pytest.raises(ValueError, match="MINIMAX_API_KEY"):
            MiniMaxClient()

After:
with patch.dict(os.environ, {}, clear=True):
    with pytest.raises(ValueError, match="MINIMAX_API_KEY"):
        MiniMaxClient()

Add MiniMax-M2.7 as the new default model while keeping M2.5 and
M2.5-highspeed as alternatives. Update config, tests, and docs.
@octo-patch octo-patch changed the title feat: add MiniMax as first-class LLM provider feat: add MiniMax as first-class LLM provider (M2.7 default) Mar 18, 2026