
feat: upgrade MiniMax provider to OpenAI-compatible API with M2.7 models#60

Open
octo-patch wants to merge 1 commit into codefuse-ai:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Upgrade the existing MiniMax model worker from the legacy api.minimax.chat endpoint to the new OpenAI-compatible Chat Completions API at api.minimax.io/v1, with support for the latest MiniMax-M2.7 models.

Changes

  • Upgrade MiniMaxWorker to use OpenAI-compatible /v1/chat/completions endpoint
  • Update models: replace the legacy abab5.5-chat with MiniMax-M2.7 (204K context) and MiniMax-M2.7-highspeed
  • Standard message format: Use OpenAI-style role/content instead of legacy sender_type/text
  • Temperature clamping: Enforce MiniMax's valid range (0.0, 1.0]
  • Environment variable: Support MINIMAX_API_KEY for API authentication
  • Remove legacy dependencies: No more GroupId, no more chatcompletion_pro URL patterns
  • Config examples: Add MiniMax entry in model_config.py.example and server_config.py.example
  • README: Add MiniMax-M2.7 to the supported models table
  • Tests: 29 unit tests + 3 integration tests
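The temperature clamping described above can be sketched as follows. This is a minimal illustration, not the worker's actual code; `clamp_temperature` and the `0.01` floor are assumed names/values for the sketch:

```python
def clamp_temperature(temperature: float) -> float:
    """Clamp a temperature into MiniMax's valid range (0.0, 1.0].

    The range excludes 0.0, so non-positive values are raised to a
    small positive floor (0.01 here is an assumption); values above
    1.0 are capped at 1.0.
    """
    epsilon = 0.01  # assumed floor, not taken from the PR
    if temperature <= 0.0:
        return epsilon
    return min(temperature, 1.0)
```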

Files Changed

| File | Description |
| --- | --- |
| `examples/model_workers/minimax.py` | Upgraded worker implementation |
| `configs/model_config.py.example` | Added `minimax-api` config entry |
| `configs/server_config.py.example` | Added `minimax-api` worker port |
| `README.md` / `README_en.md` | Added MiniMax-M2.7 to models table |
| `tests/test_minimax_worker.py` | Unit + integration tests |

API Reference

MiniMax Models

| Model | Context | Description |
| --- | --- | --- |
| MiniMax-M2.7 | 204K | Peak Performance. Ultimate Value. Master the Complex |
| MiniMax-M2.7-highspeed | 204K | Same performance, faster and more agile |
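Since the worker now targets an OpenAI-compatible endpoint, a request body follows the standard Chat Completions schema. The helper below is a sketch for illustration (`build_chat_request` is a hypothetical name, not from the PR); it only builds the JSON payload that would be POSTed to `https://api.minimax.io/v1/chat/completions`:

```python
import json


def build_chat_request(messages, model="MiniMax-M2.7", temperature=0.7):
    """Build an OpenAI-compatible Chat Completions payload for the
    MiniMax endpoint. Only the core fields are shown here."""
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
    }


payload = build_chat_request([{"role": "user", "content": "Hello"}])
print(json.dumps(payload))
```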

Configuration Example

```python
ONLINE_LLM_MODEL = {
    "minimax-api": {
        "version": "MiniMax-M2.7",
        "api_base_url": "https://api.minimax.io/v1",
        "api_key": os.environ.get("MINIMAX_API_KEY", ""),
        "provider": "MiniMaxWorker",
    },
}
```
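For reference, the legacy API used `sender_type`/`text` message fields where the new API expects `role`/`content`. A conversion between the two formats could look like the following sketch (`convert_legacy_messages` and the role mapping are illustrative assumptions, not the worker's actual code):

```python
# Assumed mapping from legacy MiniMax sender types to OpenAI roles.
ROLE_MAP = {"USER": "user", "BOT": "assistant", "SYSTEM": "system"}


def convert_legacy_messages(legacy_messages):
    """Translate legacy sender_type/text messages into
    OpenAI-style role/content messages, defaulting unknown
    sender types to "user"."""
    return [
        {"role": ROLE_MAP.get(m["sender_type"], "user"), "content": m["text"]}
        for m in legacy_messages
    ]
```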

API docs: https://platform.minimax.io/docs/api-reference/text-openai-api
