
Add MiniMax model support#155

Open
ximiximi423 wants to merge 2 commits into VectifyAI:main from ximiximi423:feature/add-minimax-support

Conversation

@ximiximi423

Summary

  • Add MiniMax model support via its OpenAI-compatible API (https://api.minimaxi.com/v1)
  • Auto-detect MiniMax models by name prefix and route to the correct API endpoint
  • Fix tiktoken fallback to cl100k_base for non-OpenAI models
  • Update README with MiniMax API key setup and model options

Usage

```shell
# Set MiniMax API key in .env
MINIMAX_API_KEY=your_minimax_key_here

# Run with MiniMax model
python3 run_pageindex.py --pdf_path doc.pdf --model MiniMax-M2.5
```

Supported MiniMax models: MiniMax-M1, MiniMax-M2.5, MiniMax-M2.5-highspeed, etc.

Design

Since MiniMax provides an OpenAI-compatible API, the implementation reuses the existing OpenAI SDK client by passing a custom base_url. A helper function _get_client_kwargs() selects the appropriate credentials and endpoint based on the model name. No new dependencies are required.

Test plan

  • Verified imports and helper functions work correctly
  • Verified backward compatibility with OpenAI models (no behavior change)
  • Verified token counting fallback for non-OpenAI model names

Relates to #150

🤖 Generated with Claude Code

ximiximi423 and others added 2 commits March 9, 2026 15:20
MiniMax provides an OpenAI-compatible API, so we route requests to
their endpoint (https://api.minimaxi.com/v1) when the model name
starts with "minimax". This keeps the existing OpenAI flow unchanged.

Changes:
- Add MINIMAX_API_KEY env var and base_url routing in utils.py
- Fix tiktoken fallback for non-OpenAI models (cl100k_base)
- Update README with MiniMax API key setup and model options

Usage: python3 run_pageindex.py --pdf_path doc.pdf --model MiniMax-M2.5

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add explicit --provider flag (openai/minimax) instead of auto-detecting by model name
- Add --api-base-url option for custom API endpoints
- Update MiniMax base URL to https://api.minimax.io/v1
- Enhance README with detailed provider documentation and usage examples
- Support environment variables LLM_PROVIDER and API_BASE_URL

This gives users more control over LLM routing and makes it easier to
use custom endpoints or switch providers without changing model names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Made-with: Cursor
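The provider/endpoint precedence described in the second commit could look roughly like this; flag names match the commit message, but the defaults and helper names here are assumptions:

```python
import argparse
import os

# Provider defaults; an explicit --api-base-url overrides these.
PROVIDER_BASE_URLS = {
    "openai": None,  # let the SDK use its default endpoint
    "minimax": "https://api.minimax.io/v1",
}

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser()
    p.add_argument("--pdf_path", required=True)
    p.add_argument("--model", default="gpt-4o")  # hypothetical default
    p.add_argument("--provider", choices=["openai", "minimax"],
                   default=os.environ.get("LLM_PROVIDER", "openai"))
    p.add_argument("--api-base-url",
                   default=os.environ.get("API_BASE_URL"))
    return p

def resolve_base_url(args) -> "str | None":
    # Explicit --api-base-url (or API_BASE_URL) wins; otherwise fall
    # back to the selected provider's default endpoint.
    return args.api_base_url or PROVIDER_BASE_URLS[args.provider]
```

This keeps routing independent of model names, so a custom or self-hosted endpoint can be used without renaming models.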
