feat: add MiniMax as first-class LLM provider #2322
octo-patch wants to merge 1 commit into logancyang:master from
Conversation
Add MiniMax AI (https://www.minimaxi.com) as a built-in LLM provider with OpenAI-compatible API integration. MiniMax offers M2.7 and M2.5 models with up to a 1M context window.

Changes:
- Add MINIMAX to ChatModelProviders enum with M2.7 and M2.5 built-in models
- Add provider metadata (API host, key management URL, model list endpoint)
- Add ChatOpenAI-based provider config with baseURL routing
- Add minimaxApiKey to CopilotSettings and DEFAULT_SETTINGS
- Add MiniMax model response types and adapter in providerModels.ts
- Add topP and frequencyPenalty support
- Add 20 unit tests and 3 integration tests
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3a37bec0f4
```ts
  name: ChatModels.MINIMAX_M2_7,
  provider: ChatModelProviders.MINIMAX,
  enabled: false,
  isBuiltIn: true,
},
```
Add MiniMax built-ins to existing users' active model list
On upgrade, these new built-ins never reach persisted settings. setSettings() rehydrates through mergeActiveModels() in src/settings/model.ts, and that function only seeds the map with builtInModels.filter(model => model.core) before merging the saved activeModels. Because this new MiniMax entry (and the adjacent M2.5 entry) is non-core and disabled, any existing vault that already has an activeModels array will never see MiniMax in Model Settings, so the advertised “enable MiniMax-M2.7/M2.5” path only works on fresh installs unless the user manually imports the models.
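One way to address this would be to seed the merge with every built-in model rather than only the `core` ones, letting saved user entries override the defaults. The sketch below is a hypothetical, simplified stand-in for the plugin's actual `mergeActiveModels()` — the `BuiltInModel` shape and keying scheme are assumptions, not the real `src/settings/model.ts` code:

```typescript
// Simplified stand-in for the plugin's model type; assumption, not the real interface.
interface BuiltInModel {
  name: string;
  provider: string;
  core?: boolean;
  enabled: boolean;
}

// Per the review, the current code seeds only with builtInModels.filter(m => m.core),
// so non-core entries like MiniMax never surface for existing vaults. Seeding with
// ALL built-ins, then overlaying saved entries, lets new providers appear on
// upgrade while user state still wins.
function mergeActiveModels(
  builtInModels: BuiltInModel[],
  savedActiveModels: BuiltInModel[]
): BuiltInModel[] {
  const map = new Map<string, BuiltInModel>();
  // Seed with every built-in, not just the core ones.
  for (const model of builtInModels) {
    map.set(`${model.provider}|${model.name}`, model);
  }
  // Saved user settings take precedence over the defaults.
  for (const model of savedActiveModels) {
    map.set(`${model.provider}|${model.name}`, model);
  }
  return [...map.values()];
}
```

With this merge order, an existing vault's persisted `activeModels` keeps its enabled/disabled flags, but a newly shipped non-core built-in such as MiniMax-M2.7 still shows up in Model Settings after upgrade.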
```ts
[ChatModelProviders.MINIMAX]: {
  modelName: modelName,
  apiKey: await getDecryptedKey(customModel.apiKey || settings.minimaxApiKey),
  configuration: {
    baseURL: customModel.baseUrl || ProviderInfo[ChatModelProviders.MINIMAX].host,
```
Clamp MiniMax temperature to the backend's accepted range
MiniMax's OpenAI-compatible API only accepts temperature in (0,1], but this provider is wired through the generic ChatOpenAI path that forwards the shared Copilot temperature unchanged via getTemperatureForModel()/baseConfig. Since the UI currently allows 0..2, users who already run with temperature=0 or >1 on another provider will start getting request failures as soon as they switch to MiniMax, because nothing here validates or clamps the value for that backend.
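A minimal sketch of a per-provider clamp, assuming the (0, 1] bounds stated above; `clampMiniMaxTemperature` and the 0.01 floor are invented for illustration and are not part of the plugin:

```typescript
// MiniMax rejects temperature = 0, so nudge to a small positive floor (assumed value).
const MINIMAX_TEMP_MIN = 0.01;
const MINIMAX_TEMP_MAX = 1;

// Copilot's UI allows 0..2 globally; clamp into MiniMax's accepted (0, 1] range
// before the value reaches the ChatOpenAI request config.
function clampMiniMaxTemperature(temperature: number): number {
  return Math.min(Math.max(temperature, MINIMAX_TEMP_MIN), MINIMAX_TEMP_MAX);
}
```

Applying a clamp like this where `getTemperatureForModel()` feeds the MiniMax branch would keep users who run temperature 0 or 1.5 on other providers from hitting request failures when they switch.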
Summary

Add MiniMax AI as a built-in LLM provider alongside existing providers like OpenAI, DeepSeek, SiliconFlow, etc.
MiniMax provides OpenAI-compatible API endpoints, making integration straightforward using ChatOpenAI from LangChain, following the same pattern as DeepSeek and SiliconFlow.

Models added:
- MiniMax-M2.7
- MiniMax-M2.5

Changes:
- src/constants.ts — Add MINIMAX to ChatModelProviders enum, ChatModels enum, BUILTIN_CHAT_MODELS, ProviderInfo metadata, ProviderSettingsKeyMap, and DEFAULT_SETTINGS
- src/settings/model.ts — Add minimaxApiKey to CopilotSettings interface
- src/LLMProviders/chatModelManager.ts — Add MiniMax constructor mapping (ChatOpenAI), API key getter, provider config with baseURL routing, and topP/frequencyPenalty support
- src/settings/providerModels.ts — Add MiniMaxModelResponse type, MiniMaxModel interface, ProviderResponseMap entry, and model list adapter
- src/LLMProviders/minimax.test.ts — 20 unit tests covering provider registration, built-in models, provider info, settings mapping, and model adapter
- src/integration_tests/minimax.test.ts — 3 integration tests covering chat completion, model variants, and streaming
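The model list adapter in providerModels.ts could look roughly like the sketch below. This is an illustrative stand-in, not the PR's exact code: it assumes MiniMax's OpenAI-compatible `/models` endpoint returns a `{ data: [{ id, ... }] }` payload, and `adaptMiniMaxModels` is an invented name:

```typescript
// Assumed shape of one entry from MiniMax's OpenAI-compatible /models response.
interface MiniMaxModel {
  id: string;
  object?: string;
}

// Assumed top-level response wrapper, mirroring the OpenAI list format.
interface MiniMaxModelResponse {
  data: MiniMaxModel[];
}

// Flatten the provider response into the plain model-name list the
// settings UI consumes, matching the adapter pattern other providers use.
function adaptMiniMaxModels(response: MiniMaxModelResponse): string[] {
  return response.data.map((model) => model.id);
}
```

Keeping the adapter a pure function over the response type is what lets the PR cover it with plain unit tests, with no network access required.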
Test plan
- npm run build passes (TypeScript check + esbuild)
- npm run test -- --testPathPattern=minimax — 20/20 unit tests pass
- Integration tests require MINIMAX_API_KEY in .env.test

How users enable MiniMax