feat: support non-Gemini LLM providers via LiteLLM model prefixes (#7) #16

Open

mvanhorn wants to merge 1 commit into AsyncFuncAI:main from mvanhorn:feat/7-asyncreview-7-multi-provider-llm

Conversation

@mvanhorn

What

Removes the hardcoded gemini/ prefix from the model path so any LiteLLM-compatible provider works (OpenAI, Anthropic, Ollama, OpenRouter, Groq, ...). dspy.LM is already a LiteLLM wrapper - it routes by prefix and reads the matching provider env var (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) automatically. The previous code force-prepended gemini/ to any user-supplied model string, which blocked every non-Gemini provider.
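
For context, this is what that routing looks like from the caller's side. A minimal sketch using DSPy's public API; the model string here is illustrative, not code from this PR:

    import dspy

    # dspy.LM hands the model string straight to LiteLLM, which routes on
    # the provider prefix and reads the matching env var on its own:
    #   "openai/..."      -> OPENAI_API_KEY
    #   "anthropic/..."   -> ANTHROPIC_API_KEY
    #   "ollama_chat/..." -> local Ollama server, no API key needed
    lm = dspy.LM("openai/gpt-4o")
    dspy.configure(lm=lm)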

Why

Closes #7 ("Support non-Gemini LLM providers"). The same change also addresses #10 (local LLMs) and #2 (other models), since the underlying fix is the prefix mechanism rather than anything provider-specific.

How

Heuristic backward compat (Option 2 from the design note): a small _normalize_model_name() helper adds the gemini/ prefix only when the user passed a bare gemini-* model name. Fully qualified strings like openai/gpt-4o, anthropic/claude-3-5-sonnet, or ollama_chat/qwen3:4b pass through verbatim. Existing users running with gemini-3-pro-preview (no prefix) see zero behavior change.
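
A minimal sketch of that heuristic, based purely on the description above (the real implementation lives in cli/virtual_runner.py and may differ in detail):

    def _normalize_model_name(model: str) -> str:
        """Prefix bare Gemini names for LiteLLM; pass everything else through.

        "gemini-3-pro-preview" -> "gemini/gemini-3-pro-preview"
        "openai/gpt-4o"        -> "openai/gpt-4o"        (unchanged)
        "ollama_chat/qwen3:4b" -> "ollama_chat/qwen3:4b" (unchanged)
        """
        if "/" not in model and model.startswith("gemini-"):
            return f"gemini/{model}"
        return model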

Touched files:

  • cli/virtual_runner.py - prefix logic moved into _normalize_model_name
  • cr/config.py - documents that LiteLLM picks up provider env vars
  • cli/main.py - --model help text shows examples for openai/anthropic/ollama
  • README.md / INSTALLATION.md - new "Using non-Gemini providers" section
  • npx/src/api-key.ts - detect any of GEMINI_API_KEY / OPENAI_API_KEY / ANTHROPIC_API_KEY; skip the prompt entirely for ollama_chat/ models (no key needed)
  • npx/src/index.ts / cli.ts / python-runner.ts - thread the model into the API-key check
  • npx/python/ parallel tree - mirror of the root-tree changes (since this tree is a duplicate of the root for the npx CLI shim)
  • tests/test_integration.py - skip-when-no-LLM-env-var instead of skip-when-no-Gemini
  • npx/python/tests/test_e2e_virtual_runner.py - provider-neutral fixture

GEMINI_API_KEY is kept as a backward-compat path. Defaults (MAIN_MODEL, SUB_MODEL) still point at Gemini.

Testing

  • ruff check . clean
  • mypy cr/ clean
  • pytest tests/ -v passes locally (integration test skips by default without an LLM key, same as before)
  • Manual smoke test with non-Gemini providers: not run yet; happy to verify one specifically in a follow-up before merge if you want

Checklist

  • Code follows project style
  • Self-reviewed
  • No breaking changes (heuristic prefix preserves the bare-Gemini path)

Note on the npx/python/ parallel tree: this PR mirrors changes to both copies manually. The duplication is an existing pattern in the repo - happy to do a follow-up that refactors them into a shared package if that's the direction you want to go.

Fixes #7, addresses #10 and #2

feat: support non-Gemini LLM providers via LiteLLM model prefixes (AsyncFuncAI#7)

DSPy.LM is a thin wrapper around LiteLLM, which already routes by
provider prefix (gemini/, openai/, anthropic/, ollama_chat/, groq/,
etc.) and reads the matching provider env var automatically. The
previous code force-prepended gemini/ to any user-supplied model
string, blocking every non-Gemini provider.

Stop force-prepending. Use a small _normalize_model_name() helper
that only adds the gemini/ prefix when the user passed a bare
gemini-* model name, so existing GEMINI_API_KEY users see no
behavior change.

- cli/virtual_runner.py: prefix logic moved into _normalize_model_name
- cr/config.py: doc-comment that LiteLLM picks up provider env vars
- cli/main.py: --model help shows examples for openai/anthropic/ollama
- README.md / INSTALLATION.md: short "Using non-Gemini providers" section
- npx/src/api-key.ts: detect any of GEMINI_API_KEY / OPENAI_API_KEY /
  ANTHROPIC_API_KEY; skip the prompt entirely for ollama_chat/ models
- npx/python/ parallel tree: mirror of the root-tree changes
- tests/test_integration.py: skip-when-no-LLM-env-var (not just Gemini)
- npx/python/tests/test_e2e_virtual_runner.py: provider-neutral fixture

Closes AsyncFuncAI#7. Same change addresses AsyncFuncAI#10 (local LLMs) and AsyncFuncAI#2 (other models)
because the underlying fix is the prefix mechanism, not provider-specific.

Backward compat: bare "gemini-3-pro-preview" still works because of the
gemini-* heuristic. Users who pass a fully-qualified LiteLLM string
(openai/gpt-4o, anthropic/claude-3-5-sonnet, ollama_chat/qwen3:4b)
now reach the right provider.

@gemini-code-assist (Bot) left a comment

Code Review

This pull request expands support for non-Gemini LLM providers (OpenAI, Anthropic, Ollama) by integrating LiteLLM/DSPy model prefixes across the CLI, configuration, and test suites. Key changes include updated documentation, normalized model naming, and flexible API key handling. Feedback focuses on refining the API key prompting logic to prevent regressions when multiple provider keys are present and ensuring E2E tests correctly match the API key to the selected test model.

Comment thread: npx/src/api-key.ts
Comment on lines +21 to +27
if (LLM_API_KEY_VARS.some((name) => process.env[name])) {
  return undefined;
}

if (usesLocalProvider(model)) {
  return undefined;
}

Severity: high

The current logic for skipping the API key prompt is too broad and introduces a regression. If a user has OPENAI_API_KEY set in their environment but intends to use the default Gemini model (without having GEMINI_API_KEY set), they will not be prompted for a key, and the review will fail later. The check should be specific to the selected model's provider.

    // 2. Check if the environment already has the key for the selected model
    const isGemini = !model || model.startsWith('gemini-') || model.startsWith('gemini/');
    if (isGemini && process.env.GEMINI_API_KEY) {
        return undefined;
    }

    if (!isGemini) {
        const provider = model?.split('/')[0];
        const envVar = provider ? `${provider.toUpperCase()}_API_KEY` : null;
        if ((envVar && process.env[envVar]) || usesLocalProvider(model)) {
            return undefined;
        }
        // For non-Gemini models without an environment variable, we don't prompt for a Gemini key
        return undefined;
    }

Comment thread: npx/python/tests/test_e2e_virtual_runner.py
Comment on lines +26 to +32
def llm_api_key():
    """Ensure an LLM provider API key is set for E2E tests."""
    for name in LLM_API_KEY_VARS:
        key = os.getenv(name)
        if key:
            return key
    pytest.skip("No LLM API key set, skipping E2E tests")

Severity: medium

The llm_api_key fixture returns the first key found in LLM_API_KEY_VARS. However, TEST_MODEL defaults to a Gemini model. If a user has OPENAI_API_KEY set but not GEMINI_API_KEY, the test will attempt to run a Gemini model with an OpenAI key, leading to failure. The fixture should return the key that corresponds to the TEST_MODEL.

def llm_api_key():
    """Ensure the correct LLM provider API key is set for E2E tests based on TEST_MODEL."""
    provider = TEST_MODEL.split('/')[0] if '/' in TEST_MODEL else 'gemini'
    if TEST_MODEL.startswith('gemini-'):
        provider = 'gemini'
    
    env_var = f"{provider.upper()}_API_KEY"
    key = os.getenv(env_var)
    if key:
        return key
        
    pytest.skip(f"{env_var} not set, skipping E2E tests for {TEST_MODEL}")

Development

Successfully merging this pull request may close these issues.

Support non-Gemini LLM providers (OpenAI-compatible, Anthropic, etc.)
