feat: add MiniMax as first-class LLM provider #619

Open
octo-patch wants to merge 1 commit into ValueCell-ai:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a first-class LLM provider in ValueCell, supporting MiniMax-M2.7, M2.7-highspeed, M2.5, and M2.5-highspeed models (all with 204K context length) via the OpenAI-compatible API.

Changes

Backend (6 files)

  • MiniMaxProvider class in factory.py — extends ModelProvider, uses agno's OpenAILike with temperature clamping to (0.0, 1.0] range
  • Registered in ModelFactory._providers registry
  • Provider YAML config (minimax.yaml) with 4 models and connection settings
  • config.yaml registration with MINIMAX_API_KEY env var
  • JSON mode detection for minimax.io base URL in model.py
  • API key URL mapping in models router
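
The temperature clamping mentioned above can be sketched as a standalone helper (hypothetical name and epsilon value; the real logic lives inside the `MiniMaxProvider` class in `factory.py`):

```python
def clamp_temperature(temperature: float) -> float:
    """Clamp a requested temperature into MiniMax's accepted (0.0, 1.0] range.

    MiniMax rejects temperature == 0.0, so values at or below zero are
    mapped to a small positive epsilon; values above 1.0 are capped at 1.0.
    (Sketch only; the exact epsilon used by ValueCell may differ.)
    """
    epsilon = 0.01  # assumed lower bound, not confirmed by the PR
    if temperature <= 0.0:
        return epsilon
    return min(temperature, 1.0)
```

In the actual provider this clamp would be applied to the kwargs passed through to agno's `OpenAILike` model before each request.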

Frontend (5 files)

  • MiniMax provider icon
  • Icon registration in icons.ts
  • i18n strings for EN, ZH_CN, ZH_TW, JA locales

Documentation (5 files)

  • All 4 README files (EN/ZH/ZH_Hant/JA) updated with MiniMax in provider list
  • CONFIGURATION_GUIDE.md provider table updated

Tests (2 files, 34 tests)

  • 28 unit tests: provider init, model creation, temperature clamping, parameter merging, embedding rejection, factory registration, JSON mode detection, YAML validation, config registration
  • 6 integration tests: ConfigManager loading, validation, factory end-to-end model creation, enabled providers list, available models
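
The JSON mode detection covered by the unit tests could be exercised along these lines (a sketch with assumed function names; the actual check lives in `model.py`):

```python
def supports_json_mode(base_url: str) -> bool:
    """Assumed shape of the minimax.io base-URL check described in the PR."""
    return "minimax.io" in base_url


def test_json_mode_detection() -> None:
    # MiniMax's OpenAI-compatible endpoint should be detected...
    assert supports_json_mode("https://api.minimax.io/v1")
    # ...while unrelated base URLs should not trigger JSON mode.
    assert not supports_json_mode("https://api.openai.com/v1")
```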

Configuration

```shell
# Set API key
export MINIMAX_API_KEY=your-key-here

# Optionally set as primary provider
export PRIMARY_PROVIDER=minimax
```

Models

| Model | Context | Description |
|---|---|---|
| MiniMax-M2.7 | 204K | Latest flagship model |
| MiniMax-M2.7-highspeed | 204K | Faster variant |
| MiniMax-M2.5 | 204K | Previous generation |
| MiniMax-M2.5-highspeed | 204K | Faster variant |

Test Plan

  • All 34 unit + integration tests pass (pytest valuecell/adapters/models/tests/test_minimax_provider.py)
  • Manual test: configure MINIMAX_API_KEY and verify model creation
  • Manual test: verify provider appears in Settings UI

20 files changed, 1195 insertions(+), 9 deletions(-)

Add MiniMax AI as a dedicated model provider with full integration across
the backend, frontend, and configuration system.

Backend changes:
- MiniMaxProvider class in factory.py using agno's OpenAILike with
  temperature clamping to (0.0, 1.0] range
- Provider registered in ModelFactory._providers registry
- MiniMax YAML config with M2.7, M2.7-highspeed, M2.5, M2.5-highspeed
  models (all 204K context)
- config.yaml registration with MINIMAX_API_KEY env var
- JSON mode detection for minimax.io base_url in model.py
- API key URL mapping in models router

Frontend changes:
- MiniMax provider icon in model-providers/
- Icon registration in icons.ts constants
- i18n strings for en, zh_CN, zh_TW, ja locales

Documentation:
- README (EN/ZH/ZH_Hant/JA) provider list updated
- CONFIGURATION_GUIDE.md provider table updated

Tests: 34 tests (28 unit + 6 integration), all passing
