Feature Request: Custom OpenAI-Compatible API Endpoint Support
Problem
Currently, codebase-context only supports the official OpenAI API endpoint. This limits users who want to use:
- Ollama for local LLM inference
- LiteLLM Proxy for unified model access
- Groq, OpenRouter, or other OpenAI-compatible providers
- Self-hosted models (e.g., vLLM, text-generation-inference)
Proposed Solution
Add support for the OPENAI_BASE_URL environment variable (or similar configuration) to allow users to specify a custom base URL for OpenAI-compatible API endpoints.
Expected Behavior
```bash
# Use Ollama locally
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_API_KEY=ollama  # Ollama doesn't require a real key

# Or use a LiteLLM proxy
export OPENAI_BASE_URL=http://localhost:8000
export OPENAI_API_KEY=sk-...

codebase-context index /path/to/project
codebase-context ask "Explain this codebase"
```
Benefits
- Privacy: Keep code analysis local with Ollama
- Cost savings: Use local models instead of paid APIs
- Flexibility: Use any OpenAI-compatible endpoint
- Offline capability: Work without internet connectivity
Additional Context
I originally reported this as part of #68 (mutex lock failure on Intel Mac), but creating a separate issue to track this feature independently.
Technical Notes
- The codebase appears to use `@langchain/openai`, which supports a `baseURL` parameter
- This should be a relatively small change to pass through the environment variable
- Would also need to document this in the README
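To illustrate the scale of the change, here is a minimal sketch of how the environment variable could be resolved into client options. The helper name `resolveOpenAIClientOptions` is hypothetical and not part of codebase-context today; it only shows the intended fallback behavior.

```typescript
// Hypothetical helper: turn environment variables into OpenAI client options.
interface OpenAIClientOptions {
  apiKey: string;
  baseURL?: string;
}

function resolveOpenAIClientOptions(
  env: Record<string, string | undefined>
): OpenAIClientOptions {
  return {
    // Fall back to an empty string so local providers (e.g. Ollama) that
    // ignore the key still work when users set a placeholder value.
    apiKey: env.OPENAI_API_KEY ?? "",
    // Only set baseURL when the user provides one; otherwise the OpenAI
    // SDK's default endpoint (https://api.openai.com/v1) is used.
    ...(env.OPENAI_BASE_URL ? { baseURL: env.OPENAI_BASE_URL } : {}),
  };
}
```

Assuming the project instantiates its models via `@langchain/openai`, these options could then be forwarded through the constructor's `configuration` field (which accepts `baseURL`), e.g. `new ChatOpenAI({ configuration: resolveOpenAIClientOptions(process.env) })`.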
Environment
- OS: macOS Sequoia 15.2 (Intel)
- Node: v24.12.0
- codebase-context: latest