Anthropic-compatible local proxy for running Claude Code through OpenCode Zen or another OpenAI-compatible chat/completions endpoint.
The proxy accepts Claude Code's Anthropic-style messages API, translates requests into OpenAI-compatible chat completions, and maps responses back into Anthropic-style messages, streaming events, tool calls, and thinking blocks.
Claude Code expects Anthropic-compatible endpoints. OpenCode Zen exposes an OpenAI-compatible API. This project sits between them so Claude Code can use Zen-backed models while keeping Claude Code's local configuration, tool-use flow, and streaming behavior intact.
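The request direction can be pictured as a pure mapping from one body shape to the other. The sketch below is a simplified illustration with assumed field handling — it is not the code in this repository, which also translates tools, thinking blocks, and streaming flags:

```javascript
// Simplified sketch: map an Anthropic-style /v1/messages body to an
// OpenAI-style chat/completions body. Illustrative only.
function anthropicToOpenAI(body, upstreamModel) {
  const messages = [];
  if (body.system) {
    // Anthropic carries the system prompt as a top-level field;
    // OpenAI expects it as the first chat message.
    messages.push({ role: "system", content: body.system });
  }
  for (const m of body.messages) {
    // Anthropic content may be a string or an array of content blocks.
    const text = Array.isArray(m.content)
      ? m.content.filter((b) => b.type === "text").map((b) => b.text).join("\n")
      : m.content;
    messages.push({ role: m.role, content: text });
  }
  return {
    model: upstreamModel, // the local alias is replaced by the upstream model ID
    messages,
    max_tokens: body.max_tokens,
    stream: Boolean(body.stream),
  };
}
```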
- Anthropic-compatible `POST /v1/messages`
- Anthropic-compatible `POST /v1/messages/count_tokens`
- Anthropic-compatible `GET /v1/models` and `GET /v1/models/:id`
- OpenAI-compatible upstream `chat/completions` forwarding
- Streaming SSE translation back to Anthropic events
- Tool definition, tool call, and tool result translation
- DeepSeek thinking and reasoning effort defaults
- Lightweight Node.js runtime with built-in `node:test` coverage
- Optional `claude-zen` wrapper for keeping this setup separate from your normal `claude` command
```mermaid
flowchart LR
    A["Claude Code"] -->|"Anthropic messages API"| B["Local Zen Proxy"]
    B -->|"OpenAI chat/completions"| C["OpenCode Zen"]
    C -->|"OpenAI-style completion or stream"| B
    B -->|"Anthropic message or SSE events"| A
```
The main translation layer lives in `src/anthropic-openai-proxy.js`. The HTTP server and Anthropic-compatible routes live in `src/server.js`.
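The reverse direction follows the same idea. Here is a minimal sketch of mapping an OpenAI-style completion back into an Anthropic-style message — again an assumed simplification (tool calls and thinking blocks omitted), not the project's actual code:

```javascript
// Simplified sketch: map an OpenAI-style chat completion back to an
// Anthropic-style message. Illustrative only.
function openAIToAnthropic(completion, aliasModel) {
  const choice = completion.choices[0];
  // OpenAI finish reasons map onto Anthropic stop reasons.
  const stopReason = choice.finish_reason === "length" ? "max_tokens" : "end_turn";
  return {
    id: completion.id,
    type: "message",
    role: "assistant",
    model: aliasModel, // report the local alias back to Claude Code
    content: [{ type: "text", text: choice.message.content ?? "" }],
    stop_reason: stopReason,
    usage: {
      input_tokens: completion.usage?.prompt_tokens ?? 0,
      output_tokens: completion.usage?.completion_tokens ?? 0,
    },
  };
}
```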
- Node.js 20 or newer
- An OpenCode Zen API key or compatible upstream API key
- Claude Code, if you want to use the proxy from the Claude CLI
Clone the repository and prepare your local environment:
```bash
cp .env.example .env.local
```

Edit `.env.local` with your upstream key:
```
UPSTREAM_API_KEY=your-opencode-key
UPSTREAM_MODEL=deepseek-v4-flash-free
UPSTREAM_CHAT_COMPLETIONS_URL=https://opencode.ai/zen/v1/chat/completions
ANTHROPIC_MODEL_ALIAS=claude-code-proxy
PROXY_API_KEY=choose-a-local-proxy-key
HOST=127.0.0.1
PORT=4040
```

Run the test suite:
```bash
npm test
```

Start the proxy:
```bash
./start-proxy.sh
```

Check the health endpoint:
```bash
curl -s -H 'x-api-key: choose-a-local-proxy-key' http://127.0.0.1:4040/health
```

Point Claude Code at the local proxy:
```json
{
  "ANTHROPIC_BASE_URL": "http://127.0.0.1:4040",
  "ANTHROPIC_MODEL": "claude-code-proxy",
  "ANTHROPIC_API_KEY": "choose-a-local-proxy-key"
}
```

A minimal example is included in `claude-code-settings.example.json`.
The repository also includes a wrapper script for running Claude Code through a dedicated Zen proxy process:
- `claude-zen.sh`
- `.env.zen`
- `zen-claude-settings.json`
The wrapper starts its own proxy, waits for the health check, runs Claude with the Zen-only settings file, and stops the proxy when Claude exits.
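The "waits for the health check" step can be pictured as a simple poll loop. This is a sketch of the idea only, not the wrapper's actual code; the `check` callback is a hypothetical hook:

```javascript
// Poll a readiness check until it succeeds or attempts run out.
// `check` is any async function returning true once the proxy is up,
// e.g. () => fetch("http://127.0.0.1:4040/health").then((r) => r.ok).
async function waitForHealth(check, { attempts = 20, delayMs = 250 } = {}) {
  for (let i = 0; i < attempts; i++) {
    try {
      if (await check()) return true;
    } catch {
      // proxy not listening yet; fall through to the retry delay
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false;
}
```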
Prepare `.env.zen`, then run:
```bash
cp .env.zen.example .env.zen
# edit .env.zen and set UPSTREAM_API_KEY
./claude-zen.sh --print "Reply with exactly: zen proxy ok"
```

| Method | Path | Purpose |
|---|---|---|
| GET | `/health` | Local readiness and current upstream configuration |
| GET | `/v1/models` | Anthropic-style model list containing the local alias |
| GET | `/v1/models/:id` | Anthropic-style model metadata for the configured alias |
| POST | `/v1/messages` | Main Claude Code message endpoint |
| POST | `/v1/messages/count_tokens` | Local token estimate for Claude Code budgeting |
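For a quick manual check of the main route, a request against the local `/v1/messages` endpoint looks like a normal Anthropic messages call. The payload below is illustrative, assuming the default alias and local key from this README:

```javascript
// Illustrative Anthropic-style payload for the local /v1/messages route,
// authenticated with the local PROXY_API_KEY via the x-api-key header.
const payload = {
  model: "claude-code-proxy", // the local alias from ANTHROPIC_MODEL_ALIAS
  max_tokens: 128,
  messages: [{ role: "user", content: "Say hello" }],
};

// Uncomment to send it against a running proxy:
// const res = await fetch("http://127.0.0.1:4040/v1/messages", {
//   method: "POST",
//   headers: { "content-type": "application/json", "x-api-key": "choose-a-local-proxy-key" },
//   body: JSON.stringify(payload),
// });
```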
| Variable | Default | Description |
|---|---|---|
| `UPSTREAM_API_KEY` | empty | API key for Zen or another compatible upstream |
| `UPSTREAM_MODEL` | `deepseek-v4-flash-free` | Upstream model ID sent to `chat/completions` |
| `UPSTREAM_CHAT_COMPLETIONS_URL` | `https://opencode.ai/zen/v1/chat/completions` | OpenAI-compatible upstream endpoint |
| `ANTHROPIC_MODEL_ALIAS` | `claude-code-proxy` | Local model name exposed to Claude Code |
| `PROXY_API_KEY` | empty | Optional local API key required by non-public routes |
| `DEEPSEEK_THINKING_TYPE` | `enabled` | DeepSeek thinking mode forwarded upstream |
| `DEEPSEEK_REASONING_EFFORT` | `max` | Default reasoning effort for DeepSeek-compatible requests |
| `HOST` | `127.0.0.1` | Local bind host |
| `PORT` | `4040` | Local bind port |
For most Zen models, switching starts with changing `UPSTREAM_MODEL`:

```
UPSTREAM_MODEL=minimax-m2.5-free
```

Then restart the proxy and verify:

```bash
npm test
./start-proxy.sh
```

Not every upstream model supports the same tool-calling, reasoning, content block, or streaming behavior. Before using a new model heavily, test a plain prompt, a streaming prompt, a tool call, and a tool-result follow-up.
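Those four checks can be expressed as a small set of probe payloads against a running proxy. The bodies below are assumed Anthropic-style shapes for illustration; the streaming and tool cases exercise the translation paths that differ most across models:

```javascript
// Compatibility probes for a newly configured upstream model.
// Each entry is an Anthropic-style /v1/messages body.
const probes = [
  { name: "plain", body: { max_tokens: 64, messages: [{ role: "user", content: "ping" }] } },
  { name: "stream", body: { max_tokens: 64, stream: true, messages: [{ role: "user", content: "ping" }] } },
  {
    name: "tool-call",
    body: {
      max_tokens: 128,
      tools: [{
        name: "echo",
        description: "Echo the given text back",
        input_schema: { type: "object", properties: { text: { type: "string" } } },
      }],
      messages: [{ role: "user", content: "Call the echo tool with text 'hi'" }],
    },
  },
  // A tool-result follow-up would append a tool_result content block for the
  // returned tool_use id and send the conversation again.
];
```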
Detailed notes, resource links, and compatibility checks are documented in `PROXY_RESOURCES_AND_MODEL_SWITCHING.md`.
The project currently uses Node's built-in test runner:
```bash
npm test
```

The tests cover Anthropic-to-OpenAI request translation, OpenAI-to-Anthropic response translation, thinking preservation, effort mapping, token estimation, and streaming SSE conversion.
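Since `count_tokens` is served locally, a character-based heuristic is the usual approach for that kind of estimate. The sketch below uses the common ~4-characters-per-token rule of thumb; the divisor is an assumption, not necessarily what this project uses:

```javascript
// Rough local token estimate: ~4 characters per token is a common
// rule of thumb for English text. Approximation only.
function estimateTokens(messages) {
  let chars = 0;
  for (const m of messages) {
    // Content may be a plain string or an array of content blocks.
    const blocks = Array.isArray(m.content) ? m.content : [{ type: "text", text: m.content }];
    for (const b of blocks) {
      if (b.type === "text") chars += b.text.length;
    }
  }
  return Math.max(1, Math.ceil(chars / 4));
}
```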
- Keep `.env.local` and `.env.zen` out of git.
- Use a local `PROXY_API_KEY` if anything besides your own machine can reach the proxy.
- `count_tokens` is an estimate and does not call the upstream tokenizer.
- Proxy-generated thinking signatures use `proxy-unverified` so thinking state can survive tool turns, but they are not upstream provider signatures.
- Non-text multimodal content is not translated yet.
Claude Code, Anthropic API, OpenAI-compatible API, OpenCode Zen, DeepSeek, model switching, tool calling, SSE streaming, local proxy, Node.js.