
feat: add MiniMax provider support with M2.7 default model#950

Open
octo-patch wants to merge 4 commits into voideditor:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 15, 2026

Summary

Add MiniMax as a new LLM provider via OpenAI-compatible API, with the latest M2.7 as the default model.

Changes

  • Add MiniMax provider with OpenAI-compatible API integration
  • Include MiniMax-M2.7 and MiniMax-M2.7-highspeed as primary models
  • Retain MiniMax-M2.5 and MiniMax-M2.5-highspeed as alternatives
  • Configure model capabilities (204K context, pricing, tool support)
  • Add provider settings (API key, help links)
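
The provider wiring described above might look roughly like the following sketch. All identifiers, the settings shape, and the base URL are illustrative assumptions for this PR description, not the actual voideditor types or endpoints:

```typescript
// Sketch of a MiniMax provider entry with a simple capabilities map.
// Shapes and names below are assumptions, not the real voideditor code.
type ModelCapabilities = {
  contextWindow: number;   // tokens
  inputCostPerM: number;   // USD per 1M input tokens
  outputCostPerM: number;  // USD per 1M output tokens
  supportsTools: boolean;
};

const MINIMAX_MODELS: Record<string, ModelCapabilities> = {
  'MiniMax-M2.7':           { contextWindow: 204_000, inputCostPerM: 0.30, outputCostPerM: 1.20, supportsTools: true },
  'MiniMax-M2.7-highspeed': { contextWindow: 204_000, inputCostPerM: 0.60, outputCostPerM: 2.40, supportsTools: true },
  'MiniMax-M2.5':           { contextWindow: 204_000, inputCostPerM: 0.30, outputCostPerM: 1.20, supportsTools: true },
  'MiniMax-M2.5-highspeed': { contextWindow: 204_000, inputCostPerM: 0.60, outputCostPerM: 2.40, supportsTools: true },
};

// Provider settings: default model, API key setting, and an OpenAI-compatible
// base URL. The URL is a placeholder, not MiniMax's real endpoint.
const minimaxProviderSettings = {
  defaultModel: 'MiniMax-M2.7',
  apiKeySettingName: 'minimaxApiKey',         // assumed setting name
  baseURL: 'https://api.minimax.example/v1',  // placeholder endpoint
};
```

An OpenAI-compatible client would then be constructed with this `baseURL` and the stored API key, which is what lets the existing OpenAI SDK integration be reused unchanged.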

Models

| Model | Context | Input ($/M tokens) | Output ($/M tokens) |
| --- | --- | --- | --- |
| MiniMax-M2.7 (default) | 204K | $0.30 | $1.20 |
| MiniMax-M2.7-highspeed | 204K | $0.60 | $2.40 |
| MiniMax-M2.5 | 204K | $0.30 | $1.20 |
| MiniMax-M2.5-highspeed | 204K | $0.60 | $2.40 |

Why

MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities.

Testing

  • Model capabilities correctly defined with proper typing
  • Fallback handler routes unknown MiniMax models to M2.7 default
  • OpenAI SDK integration with MiniMax base URL verified
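
The fallback behavior in the second bullet can be sketched as follows. The function name and shape are assumptions for illustration; only the model names and the M2.7 default come from this PR:

```typescript
// Known MiniMax model IDs, mirroring the Models table above.
const KNOWN_MINIMAX_MODELS = new Set([
  'MiniMax-M2.7',
  'MiniMax-M2.7-highspeed',
  'MiniMax-M2.5',
  'MiniMax-M2.5-highspeed',
]);

// Resolve a requested model name, routing anything unrecognized
// to the MiniMax-M2.7 default.
function resolveMinimaxModel(requested: string): string {
  return KNOWN_MINIMAX_MODELS.has(requested) ? requested : 'MiniMax-M2.7';
}
```

For example, `resolveMinimaxModel('MiniMax-M2.5')` returns the requested model unchanged, while an unknown name such as `resolveMinimaxModel('MiniMax-M1')` falls back to `'MiniMax-M2.7'`.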

octo-patch and others added 4 commits March 15, 2026 21:11
- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model list
- Set MiniMax-M2.7 as default model
- Keep all previous models as alternatives
@octo-patch octo-patch changed the title feat: add MiniMax provider support feat: add MiniMax provider support with M2.7 default model Mar 18, 2026
