
feat: add MiniMax as first-class LLM provider #2322

Open
octo-patch wants to merge 1 commit into logancyang:master from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax AI as a built-in LLM provider alongside existing providers like OpenAI, DeepSeek, SiliconFlow, etc.

MiniMax provides OpenAI-compatible API endpoints, making integration straightforward using ChatOpenAI from LangChain — following the same pattern as DeepSeek and SiliconFlow.
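In rough terms, the wiring looks like the sketch below. All names here (MINIMAX_BASE_URL, buildMiniMaxConfig, the host URL) are illustrative assumptions, not the identifiers used in this PR:

```typescript
// Sketch: building an OpenAI-compatible config for MiniMax.
// The base URL is an assumed placeholder, not taken from MiniMax docs.
const MINIMAX_BASE_URL = "https://api.minimaxi.com/v1";

interface OpenAICompatibleConfig {
  modelName: string;
  apiKey: string;
  configuration: { baseURL: string };
}

function buildMiniMaxConfig(
  modelName: string,
  apiKey: string,
  baseUrlOverride?: string
): OpenAICompatibleConfig {
  return {
    modelName,
    apiKey,
    // A per-model baseUrl override wins over the provider default,
    // mirroring how the DeepSeek/SiliconFlow providers route requests.
    configuration: { baseURL: baseUrlOverride ?? MINIMAX_BASE_URL },
  };
}
```

Because the endpoint speaks the OpenAI protocol, no custom client is needed; the config object is handed straight to ChatOpenAI.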

Models added:

  • MiniMax-M2.7 — latest flagship model with up to 1M context window
  • MiniMax-M2.5 — cost-effective model with 204K context

Changes:

  • src/constants.ts — Add MINIMAX to ChatModelProviders enum, ChatModels enum, BUILTIN_CHAT_MODELS, ProviderInfo metadata, ProviderSettingsKeyMap, and DEFAULT_SETTINGS
  • src/settings/model.ts — Add minimaxApiKey to CopilotSettings interface
  • src/LLMProviders/chatModelManager.ts — Add MiniMax constructor mapping (ChatOpenAI), API key getter, provider config with baseURL routing, and topP/frequencyPenalty support
  • src/settings/providerModels.ts — Add MiniMaxModelResponse type, MiniMaxModel interface, ProviderResponseMap entry, and model list adapter
  • src/LLMProviders/minimax.test.ts — 20 unit tests covering provider registration, built-in models, provider info, settings mapping, and model adapter
  • src/integration_tests/minimax.test.ts — 3 integration tests covering chat completion, model variants, and streaming

Test plan

  • npm run build passes (TypeScript check + esbuild)
  • npm run test -- --testPathPattern=minimax — 20/20 unit tests pass
  • Prettier formatting verified
  • Integration tests require MINIMAX_API_KEY in .env.test

How users enable MiniMax

  1. Get an API key from MiniMax Platform
  2. Enter the key in Copilot Settings → API Keys → MiniMax
  3. Enable MiniMax-M2.7 or MiniMax-M2.5 from the model list
  4. Users can also import models dynamically via the model list endpoint
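The dynamic import in step 4 comes down to an adapter that maps the provider's model-list response onto plain model names Copilot can import. A hedged sketch — the response shape below is an assumption for illustration, not MiniMax's documented payload:

```typescript
// Sketch of a model-list adapter. The fields `id` and `object` are
// assumed to follow the OpenAI-style /models response; the real
// MiniMax payload may differ.
interface MiniMaxModel {
  id: string;
  object: string;
}

interface MiniMaxModelResponse {
  data: MiniMaxModel[];
}

// Keep only chat models and return their ids for the import list.
function adaptMiniMaxModels(response: MiniMaxModelResponse): string[] {
  return response.data
    .filter((m) => m.object === "model")
    .map((m) => m.id);
}
```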

Add MiniMax AI (https://www.minimaxi.com) as a built-in LLM provider with
OpenAI-compatible API integration. MiniMax offers M2.7 and M2.5 models with
up to 1M context window.

Changes:
- Add MINIMAX to ChatModelProviders enum with M2.7 and M2.5 built-in models
- Add provider metadata (API host, key management URL, model list endpoint)
- Add ChatOpenAI-based provider config with baseURL routing
- Add minimaxApiKey to CopilotSettings and DEFAULT_SETTINGS
- Add MiniMax model response types and adapter in providerModels.ts
- Add topP and frequencyPenalty support
- Add 20 unit tests and 3 integration tests

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3a37bec0f4

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread: src/constants.ts
Comment on lines +439 to +443

      name: ChatModels.MINIMAX_M2_7,
      provider: ChatModelProviders.MINIMAX,
      enabled: false,
      isBuiltIn: true,
    },

P2: Add MiniMax built-ins to existing users' active model list

On upgrade, these new built-ins never reach persisted settings. setSettings() rehydrates through mergeActiveModels() in src/settings/model.ts, and that function only seeds the map with builtInModels.filter(model => model.core) before merging the saved activeModels. Because this new MiniMax entry (and the adjacent M2.5 entry) is non-core and disabled, any existing vault that already has an activeModels array will never see MiniMax in Model Settings, so the advertised “enable MiniMax-M2.7/M2.5” path only works on fresh installs unless the user manually imports the models.
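A possible fix, sketched with a simplified stand-in for the real mergeActiveModels() in src/settings/model.ts (the function body and key format here are illustrative, not the project's actual code): seed the merge map with all built-ins rather than only the core ones, then let saved entries overwrite so user choices survive.

```typescript
// Simplified stand-in for mergeActiveModels() showing the seeding fix.
interface ActiveModel {
  name: string;
  provider: string;
  enabled: boolean;
  core?: boolean;
}

function mergeActiveModels(
  builtInModels: ActiveModel[],
  savedActiveModels: ActiveModel[]
): ActiveModel[] {
  const map = new Map<string, ActiveModel>();
  // Seed with ALL built-ins (not builtInModels.filter(m => m.core)),
  // so new non-core entries like MiniMax reach existing vaults.
  for (const m of builtInModels) map.set(`${m.provider}|${m.name}`, m);
  // Saved models overwrite, preserving the user's enable/disable state.
  for (const m of savedActiveModels) map.set(`${m.provider}|${m.name}`, m);
  return [...map.values()];
}
```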


Comment on lines +403 to +407

    [ChatModelProviders.MINIMAX]: {
      modelName: modelName,
      apiKey: await getDecryptedKey(customModel.apiKey || settings.minimaxApiKey),
      configuration: {
        baseURL: customModel.baseUrl || ProviderInfo[ChatModelProviders.MINIMAX].host,

P1: Clamp MiniMax temperature to the backend's accepted range

MiniMax's OpenAI-compatible API only accepts temperature in (0,1], but this provider is wired through the generic ChatOpenAI path that forwards the shared Copilot temperature unchanged via getTemperatureForModel()/baseConfig. Since the UI currently allows 0..2, users who already run with temperature=0 or >1 on another provider will start getting request failures as soon as they switch to MiniMax, because nothing here validates or clamps the value for that backend.
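A minimal clamp along the lines Codex suggests — the (0,1] bounds come from the comment above, while the function name and the 0.01 floor are illustrative choices, not code from this PR:

```typescript
// Clamp a shared Copilot temperature (UI range 0..2) into MiniMax's
// accepted (0,1] range before the request is built. The 0.01 floor is
// an arbitrary stand-in for "strictly greater than zero".
function clampMiniMaxTemperature(temperature: number): number {
  const MIN = 0.01; // MiniMax rejects temperature = 0
  const MAX = 1; // and anything above 1
  return Math.min(Math.max(temperature, MIN), MAX);
}
```

Applying this only in the MiniMax provider config keeps other providers' behavior unchanged while preventing request failures when users switch over with an out-of-range value.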

