
Feature Request: Support custom OpenAI-compatible API endpoints (OPENAI_BASE_URL) #70

@rothnic

Description

Problem

Currently, codebase-context only supports the official OpenAI API endpoint. This limits users who want to use:

  • Ollama for local LLM inference
  • LiteLLM Proxy for unified model access
  • Groq, OpenRouter, or other OpenAI-compatible providers
  • Self-hosted models (e.g., vLLM, text-generation-inference)

Proposed Solution

Add support for the OPENAI_BASE_URL environment variable (or similar configuration) to allow users to specify a custom base URL for OpenAI-compatible API endpoints.

Expected Behavior

# Use Ollama locally
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_API_KEY=ollama  # Ollama doesn't require a real key

# Or use a LiteLLM proxy
export OPENAI_BASE_URL=http://localhost:8000
export OPENAI_API_KEY=sk-...

codebase-context index /path/to/project
codebase-context ask "Explain this codebase"

Benefits

  1. Privacy: Keep code analysis local with Ollama
  2. Cost savings: Use local models instead of paid APIs
  3. Flexibility: Use any OpenAI-compatible endpoint
  4. Offline capability: Work without internet connectivity

Additional Context

I originally reported this as part of #68 (mutex lock failure on Intel Mac), but I'm creating a separate issue to track this feature independently.

Technical Notes

  • The codebase appears to use @langchain/openai, which supports a baseURL option
  • This should be a relatively small change: read the environment variable and pass it through to the client
  • This would also need to be documented in the README
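To illustrate the pass-through, here is a minimal sketch. The helper name `resolveOpenAIOptions` and the options shape are assumptions for illustration, not the project's actual code; the idea is that @langchain/openai's `ChatOpenAI` accepts a `configuration` object that is forwarded to the underlying OpenAI SDK client, which understands `baseURL`:

```typescript
// Hypothetical helper (not the project's real code): build the options
// object for ChatOpenAI from the environment, setting a custom baseURL
// only when OPENAI_BASE_URL is present so the official endpoint stays
// the default.
interface OpenAIClientOptions {
  apiKey?: string;
  configuration?: { baseURL?: string };
}

function resolveOpenAIOptions(
  env: Record<string, string | undefined> = process.env
): OpenAIClientOptions {
  const options: OpenAIClientOptions = { apiKey: env.OPENAI_API_KEY };
  if (env.OPENAI_BASE_URL) {
    options.configuration = { baseURL: env.OPENAI_BASE_URL };
  }
  return options;
}

// Example: pointing at a local Ollama server
const opts = resolveOpenAIOptions({
  OPENAI_BASE_URL: "http://localhost:11434/v1",
  OPENAI_API_KEY: "ollama", // Ollama ignores the key but the SDK requires one
});
console.log(opts.configuration?.baseURL); // "http://localhost:11434/v1"
```

The resulting object could then be spread into the `ChatOpenAI` constructor (e.g. `new ChatOpenAI({ model, ...resolveOpenAIOptions() })`), keeping current behavior unchanged for users who don't set the variable.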

Environment

  • OS: macOS Sequoia 15.2 (Intel)
  • Node: v24.12.0
  • codebase-context: latest
