
feat: add MiniMax as alternative LLM provider for prompt optimization#62

Open
octo-patch wants to merge 1 commit into zai-org:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as an alternative LLM provider for prompt optimization alongside the existing Zhipu AI backend.

  • Provider preset system in prompt_optimize.py: --provider minimax|zhipu|openai for easy switching, with MINIMAX_API_KEY env var auto-detection
  • Gradio demo update: LLM Provider dropdown in the web UI, configurable via LLM_PROVIDER env var
  • MiniMax M2.7 model with temperature clamping for API compatibility
  • README/README_zh updated with MiniMax usage examples
  • 18 unit tests + 3 integration tests covering provider presets, temperature handling, message construction, and end-to-end API calls
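A minimal sketch of what the preset system described above might look like. All field names, base URLs, and model identifiers here are assumptions for illustration, not the PR's actual code; only the provider names, the MINIMAX_API_KEY auto-detection, the zhipu default, and the temperature clamping come from the PR description.

```python
import os

# Hypothetical preset table; endpoints and model IDs are placeholders.
PROVIDER_PRESETS = {
    "minimax": {
        "base_url": "https://api.example-minimax.invalid/v1",  # placeholder endpoint
        "model": "MiniMax-M2.7",
        "api_key_env": "MINIMAX_API_KEY",
        "max_temperature": 1.0,  # clamp for API compatibility
    },
    "zhipu": {
        "base_url": "https://api.example-zhipu.invalid/v4",  # placeholder endpoint
        "model": "glm-4",
        "api_key_env": "ZHIPU_API_KEY",
        "max_temperature": None,
    },
    "openai": {
        "base_url": "https://api.openai.com/v1",
        "model": "gpt-4o",
        "api_key_env": "OPENAI_API_KEY",
        "max_temperature": None,
    },
}


def resolve_provider(name=None):
    """Pick a provider: an explicit --provider value wins, otherwise
    auto-detect MiniMax via its env var, falling back to the zhipu default."""
    if name is None:
        name = "minimax" if os.environ.get("MINIMAX_API_KEY") else "zhipu"
    return name, PROVIDER_PRESETS[name]


def clamp_temperature(temp, preset):
    """Clamp the requested temperature to the provider's cap, if it has one."""
    cap = preset.get("max_temperature")
    return min(temp, cap) if cap is not None else temp
```

With this shape, switching providers is a one-argument change and the default path (no flag, no MINIMAX_API_KEY) behaves exactly as before.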

Changed files (7 files, 444 additions)

File — Change
inference/prompt_optimize.py — Added PROVIDER_PRESETS dict, --provider arg, MINIMAX_API_KEY env var support
inference/gradio_web_demo.py — Added provider dropdown, LLM_PROVIDER env var, imported PROVIDER_PRESETS
README.md — Added MiniMax usage examples for prompt optimization
README_zh.md — Added MiniMax usage examples (Chinese)
tests/__init__.py — Test package init
tests/test_prompt_optimize.py — 18 unit tests
tests/test_integration_minimax.py — 3 integration tests
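The unit tests in tests/test_prompt_optimize.py might resemble this sketch for the temperature-handling case. The clamp function is inlined here as an assumption rather than imported, so the snippet stands alone; the real tests exercise the actual module.

```python
import unittest


def clamp_temperature(temp, cap=1.0):
    # Hypothetical stand-in for the clamping logic under test.
    return min(max(temp, 0.0), cap)


class TestTemperatureClamping(unittest.TestCase):
    def test_above_cap_is_clamped(self):
        self.assertEqual(clamp_temperature(1.7), 1.0)

    def test_within_range_unchanged(self):
        self.assertAlmostEqual(clamp_temperature(0.7), 0.7)

    def test_negative_clamped_to_zero(self):
        self.assertEqual(clamp_temperature(-0.2), 0.0)
```

Run with the same command the test plan uses: python -m unittest tests.test_prompt_optimize -v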

Test plan

  • All 18 unit tests pass (python -m unittest tests.test_prompt_optimize -v)
  • All 3 integration tests pass with real MiniMax API (MINIMAX_API_KEY=... python -m unittest tests.test_integration_minimax -v)
  • Existing prompt_optimize.py CLI behavior unchanged (defaults to zhipu provider)
  • Verify Gradio demo with LLM_PROVIDER=minimax (requires GPU for diffusion model)
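The "message construction" the tests cover could look like the following sketch, which builds OpenAI-style chat messages for whichever provider is selected. The system prompt text and function name are placeholders, not the project's actual values.

```python
# Hypothetical system prompt; the real one lives in inference/prompt_optimize.py.
SYSTEM_PROMPT = (
    "You are a prompt engineer. Rewrite the user's prompt into a richer, "
    "more detailed description suitable for a text-to-image model."
)


def build_messages(user_prompt: str) -> list:
    """Build the chat-completion message list sent to the selected provider."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
```

Because all three providers accept the same OpenAI-style message schema, the construction step stays provider-agnostic and only the endpoint, model, and temperature cap vary per preset.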

This PR adds MiniMax as an alternative provider; the default Zhipu AI behavior and all existing functionality are unchanged.

Add PROVIDER_PRESETS system to prompt_optimize.py with support for
MiniMax (M2.7), Zhipu AI, and OpenAI as selectable LLM backends for
prompt enhancement. Update Gradio demo with provider dropdown and
LLM_PROVIDER env var. Includes 18 unit tests and 3 integration tests.
