feat: add MiniMax as LLM provider for agent evaluation#22

Open
octo-patch wants to merge 1 commit into xingyaoww:main from octo-patch:feature/add-minimax-provider
Conversation

@octo-patch

Summary

  • Add MiniMax as a first-class LLM provider for agent evaluation alongside OpenAI, Anthropic, and Google
  • Implement MiniMaxLMAgent and MiniMaxFeedbackAgent using MiniMax's OpenAI-compatible API (https://api.minimax.io/v1)
  • Support MiniMax-M2.7 and MiniMax-M2.7-highspeed models with automatic temperature clamping

Changes

| File | Description |
| --- | --- |
| `mint/agents/minimax_lm_agent.py` | Primary LM agent extending `OpenAILMAgent` with the MiniMax API endpoint, the `MINIMAX_API_KEY` env var, and temperature clamping |
| `mint/agents/minimax_feedback_agent.py` | Feedback agent for expert evaluation using MiniMax models |
| `mint/agents/__init__.py` | Export the new MiniMax agent classes |
| `mint/configs/config_variables.py` | Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to `EVALUATED_MODEL_LIST` |
| `README.md` | Add supported LLM providers table and MiniMax usage documentation |
| `tests/` | 45 tests: 27 unit + 6 integration + 3 live API + 9 framework tests |

Usage

```bash
export MINIMAX_API_KEY="your-api-key"
# MiniMax models are already configured in EVALUATED_MODEL_LIST
```
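For illustration, a minimal sketch of the configuration pattern described above: the agent reads `MINIMAX_API_KEY` from the environment, targets the OpenAI-compatible endpoint, and clamps the temperature. The class body below is a hypothetical stand-in, not the actual `mint` implementation; only the endpoint URL, env var name, model names, and clamping behavior come from this PR.

```python
import os

# OpenAI-compatible MiniMax endpoint used by this PR
MINIMAX_API_BASE = "https://api.minimax.io/v1"

class MiniMaxLMAgent:
    """Illustrative sketch; field names are assumptions, not mint's real code."""

    def __init__(self, model: str = "MiniMax-M2.7", temperature: float = 0.0):
        self.api_key = os.environ["MINIMAX_API_KEY"]
        self.api_base = MINIMAX_API_BASE
        self.model = model
        # MiniMax rejects temperature == 0.0, so clamp up to 0.01
        self.temperature = max(temperature, 0.01)

os.environ.setdefault("MINIMAX_API_KEY", "your-api-key")
agent = MiniMaxLMAgent(temperature=0.0)
print(agent.model, agent.temperature)  # MiniMax-M2.7 0.01
```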

Test plan

  • All 45 tests pass (including live API tests against MiniMax endpoint)
  • Dynamic agent instantiation works via getattr(agents, "MiniMaxLMAgent") pattern
  • Temperature clamping correctly maps 0.0 to 0.01 for MiniMax API compatibility
  • Token usage tracking works with OpenAIObject-to-int conversion
  • Feedback agent correctly handles textual and binary feedback forms
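Two of the behaviors checked above (temperature clamping and usage-count coercion) can be sketched as small helpers. These names are illustrative assumptions, not the PR's actual functions:

```python
def clamp_temperature(t: float, floor: float = 0.01) -> float:
    """Map 0.0 (and anything below the floor) up to 0.01,
    since the MiniMax API rejects a zero temperature."""
    return max(t, floor)

def usage_to_int(value) -> int:
    """Coerce a token-usage field to a plain int; the PR notes that
    OpenAIObject-wrapped counts need conversion before accumulation."""
    return int(value)

assert clamp_temperature(0.0) == 0.01
assert clamp_temperature(0.7) == 0.7
assert usage_to_int(128) == 128
```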

Add MiniMaxLMAgent and MiniMaxFeedbackAgent using MiniMax's OpenAI-compatible API (https://api.minimax.io/v1). Supports MiniMax-M2.7 and MiniMax-M2.7-highspeed models with automatic temperature clamping and token usage tracking.

Changes:
- mint/agents/minimax_lm_agent.py: Primary agent extending OpenAILMAgent
- mint/agents/minimax_feedback_agent.py: Feedback agent for expert evaluation
- Updated EVALUATED_MODEL_LIST with MiniMax model configurations
- Updated README.md with provider table and MiniMax usage docs
- 45 tests (27 unit + 6 integration + 3 live API + 9 framework tests)
