
Claude Code Zen Proxy

Anthropic-compatible local proxy for running Claude Code through OpenCode Zen or another OpenAI-compatible chat/completions endpoint.

The proxy accepts Claude Code's Anthropic-style Messages API requests, translates them into OpenAI-compatible chat completion requests, and maps the responses back into Anthropic-style messages, streaming events, tool calls, and thinking blocks.

Screenshots

Proxy health endpoint and automated test run.

Why This Exists

Claude Code expects Anthropic-compatible endpoints. OpenCode Zen exposes an OpenAI-compatible API. This project sits between them so Claude Code can use Zen-backed models while keeping Claude Code's local configuration, tool-use flow, and streaming behavior intact.

Features

  • Anthropic-compatible POST /v1/messages
  • Anthropic-compatible POST /v1/messages/count_tokens
  • Anthropic-compatible GET /v1/models and GET /v1/models/:id
  • OpenAI-compatible upstream chat/completions forwarding
  • Streaming SSE translation back to Anthropic events
  • Tool definition, tool call, and tool result translation
  • DeepSeek thinking and reasoning effort defaults
  • Lightweight Node.js runtime with built-in node:test coverage
  • Optional claude-zen wrapper for keeping this setup separate from your normal claude command

Architecture

flowchart LR
  A["Claude Code"] -->|"Anthropic messages API"| B["Local Zen Proxy"]
  B -->|"OpenAI chat/completions"| C["OpenCode Zen"]
  C -->|"OpenAI-style completion or stream"| B
  B -->|"Anthropic message or SSE events"| A

The main translation layer lives in src/anthropic-openai-proxy.js. The HTTP server and Anthropic-compatible routes live in src/server.js.
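
As a rough illustration of what the translation involves (a hand-written sketch based on the two public API formats, not output captured from this proxy): an Anthropic-style user turn such as

{"role": "user", "content": [{"type": "text", "text": "hi"}]}

is forwarded upstream as an OpenAI-style chat message such as

{"role": "user", "content": "hi"}

while tool definitions become OpenAI function tools, upstream tool_calls come back as Anthropic tool_use content blocks, and tool results are passed upstream again as tool-role messages.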

Requirements

  • Node.js 20 or newer
  • An OpenCode Zen API key or compatible upstream API key
  • Claude Code, if you want to use the proxy from the Claude CLI

Quick Start

Clone the repository and prepare your local environment:

git clone https://github.com/chandan11248/claude-code-zen-proxy.git
cd claude-code-zen-proxy
cp .env.example .env.local

Edit .env.local with your upstream key:

UPSTREAM_API_KEY=your-opencode-key
UPSTREAM_MODEL=deepseek-v4-flash-free
UPSTREAM_CHAT_COMPLETIONS_URL=https://opencode.ai/zen/v1/chat/completions
ANTHROPIC_MODEL_ALIAS=claude-code-proxy
PROXY_API_KEY=choose-a-local-proxy-key
HOST=127.0.0.1
PORT=4040

Run the test suite:

npm test

Start the proxy:

./start-proxy.sh

Check the health endpoint:

curl -s -H 'x-api-key: choose-a-local-proxy-key' http://127.0.0.1:4040/health
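
To exercise the full translation path without Claude Code, you can also send a minimal Anthropic-style request directly to the messages endpoint. This is a sketch using the example values from .env.local above (the claude-code-proxy alias and choose-a-local-proxy-key local key); adjust both to your configuration:

curl -s http://127.0.0.1:4040/v1/messages \
  -H 'x-api-key: choose-a-local-proxy-key' \
  -H 'content-type: application/json' \
  -d '{
    "model": "claude-code-proxy",
    "max_tokens": 256,
    "messages": [
      {"role": "user", "content": "Reply with exactly: zen proxy ok"}
    ]
  }'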

Claude Code Configuration

Point Claude Code at the local proxy:

{
  "ANTHROPIC_BASE_URL": "http://127.0.0.1:4040",
  "ANTHROPIC_MODEL": "claude-code-proxy",
  "ANTHROPIC_API_KEY": "choose-a-local-proxy-key"
}

A minimal example is included in claude-code-settings.example.json.
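
If you would rather not maintain a settings file, the same three values can usually be supplied as environment variables before launching Claude Code. A sketch under that assumption, reusing the example values above:

export ANTHROPIC_BASE_URL=http://127.0.0.1:4040
export ANTHROPIC_MODEL=claude-code-proxy
export ANTHROPIC_API_KEY=choose-a-local-proxy-key
claude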

Separate claude-zen Wrapper

The repository also includes a wrapper script for running Claude Code through a dedicated Zen proxy process:

  • claude-zen.sh
  • .env.zen
  • zen-claude-settings.json

The wrapper starts its own proxy, waits for the health check, runs Claude with the Zen-only settings file, and stops the proxy when Claude exits.

Prepare .env.zen, then run:

cp .env.zen.example .env.zen
# edit .env.zen and set UPSTREAM_API_KEY
./claude-zen.sh --print "Reply with exactly: zen proxy ok"

API Surface

  • GET /health: local readiness and current upstream configuration
  • GET /v1/models: Anthropic-style model list containing the local alias
  • GET /v1/models/:id: Anthropic-style model metadata for the configured alias
  • POST /v1/messages: main Claude Code message endpoint
  • POST /v1/messages/count_tokens: local token estimate for Claude Code budgeting
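
For quick manual checks of these routes, something along these lines should work, assuming the example PROXY_API_KEY from the Quick Start (drop the x-api-key header if you left PROXY_API_KEY empty):

curl -s -H 'x-api-key: choose-a-local-proxy-key' http://127.0.0.1:4040/v1/models
curl -s -H 'x-api-key: choose-a-local-proxy-key' http://127.0.0.1:4040/v1/models/claude-code-proxy
curl -s -H 'x-api-key: choose-a-local-proxy-key' \
  -H 'content-type: application/json' \
  -d '{"model": "claude-code-proxy", "messages": [{"role": "user", "content": "hello"}]}' \
  http://127.0.0.1:4040/v1/messages/count_tokens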

Configuration

  • UPSTREAM_API_KEY (default: empty): API key for Zen or another compatible upstream
  • UPSTREAM_MODEL (default: deepseek-v4-flash-free): upstream model ID sent to chat/completions
  • UPSTREAM_CHAT_COMPLETIONS_URL (default: https://opencode.ai/zen/v1/chat/completions): OpenAI-compatible upstream endpoint
  • ANTHROPIC_MODEL_ALIAS (default: claude-code-proxy): local model name exposed to Claude Code
  • PROXY_API_KEY (default: empty): optional local API key required by non-public routes
  • DEEPSEEK_THINKING_TYPE (default: enabled): DeepSeek thinking mode forwarded upstream
  • DEEPSEEK_REASONING_EFFORT (default: max): default reasoning effort for DeepSeek-compatible requests
  • HOST (default: 127.0.0.1): local bind host
  • PORT (default: 4040): local bind port

Model Switching

For most Zen models, switching starts with changing UPSTREAM_MODEL:

UPSTREAM_MODEL=minimax-m2.5-free

Then restart the proxy and verify:

npm test
./start-proxy.sh

Not every upstream model supports the same tool-calling, reasoning, content block, or streaming behavior. Before using a new model heavily, test a plain prompt, a streaming prompt, a tool call, and a tool-result follow-up.
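
The plain-prompt check is covered by the curl example in the Quick Start section; a streaming check is a small variation on it (again a sketch, assuming the same alias and local key), and tool-call behavior is easiest to exercise from Claude Code itself or through ./claude-zen.sh:

curl -sN http://127.0.0.1:4040/v1/messages \
  -H 'x-api-key: choose-a-local-proxy-key' \
  -H 'content-type: application/json' \
  -d '{
    "model": "claude-code-proxy",
    "max_tokens": 128,
    "stream": true,
    "messages": [{"role": "user", "content": "Stream a short greeting"}]
  }'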

Detailed notes, resource links, and compatibility checks are documented in PROXY_RESOURCES_AND_MODEL_SWITCHING.md.

Verification

The project currently uses Node's built-in test runner:

npm test

The tests cover Anthropic-to-OpenAI request translation, OpenAI-to-Anthropic response translation, thinking preservation, effort mapping, token estimation, and streaming SSE conversion.
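
If you prefer to call the runner directly rather than through npm (assuming npm test is a thin wrapper around node --test and the suite follows Node's default test-file discovery), the equivalent is:

node --test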

Security Notes

  • Keep .env.local and .env.zen out of git.
  • Use a local PROXY_API_KEY if anything besides your own machine can reach the proxy.
  • count_tokens is an estimate and does not call the upstream tokenizer.
  • Proxy-generated thinking signatures use proxy-unverified so thinking state can survive tool turns, but they are not upstream provider signatures.
  • Non-text multimodal content is not translated yet.

Keywords

Claude Code, Anthropic API, OpenAI-compatible API, OpenCode Zen, DeepSeek, model switching, tool calling, SSE streaming, local proxy, Node.js.
