A modular agent orchestration framework for coordinating AI-powered development workflows with GitHub Copilot, Jules, and OpenCode.
Heidi-CLI provides a flexible system for running AI-powered agent workflows with:
- Multi-Agent Orchestration - Coordinate Copilot SDK, Jules, and OpenCode agents
- Plan→Run→Audit Workflow - Strict workflow with durable artifacts
- Python SDK - Programmatic access via `client.py` and `sdk.py`
- CLI Tooling - Full-featured command-line interface
- GitHub Copilot - AI-assisted code generation and conversation
- Jules - Google's coding agent
- OpenCode - Open source AI coding assistant
- Plan Phase - Define agent tasks and handoffs
- Run Phase - Execute agents with proper routing
- Audit Phase - Verify changes and run verifications
- Interactive CLI with rich formatting
- Configurable agent templates
- Persistent workspace state
- Secret redaction for security
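Secret redaction can be pictured as a pattern-based scrub over outgoing text. The sketch below is illustrative only — the function name and patterns are assumptions, not Heidi's actual implementation:

```python
import re

# Illustrative credential shapes (not Heidi's real pattern list)
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),             # GitHub personal access token
    re.compile(r"(?i)(api[_-]?key\s*[=:]\s*)\S+"),  # key=value assignments
]

def redact(text: str) -> str:
    """Replace anything matching a secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(
            lambda m: (m.group(1) if m.lastindex else "") + "[REDACTED]", text
        )
    return text

print(redact("api_key=abc123 and ghp_" + "a" * 36))
```

The key/value pattern keeps the key name (captured in group 1) and redacts only the value, so logs stay readable.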
- Config - Global config stored in OS-specific location (not project-local):
  - Linux: `~/.config/heidi/` (or `$XDG_CONFIG_HOME/heidi`)
  - macOS: `~/Library/Application Support/Heidi`
  - Windows: `%APPDATA%/Heidi`
- State - Optional state in OS-specific location
- Cache - Optional cache in OS-specific location
- Tasks (`./tasks/`) - Task files (`<slug>.md`), audit files (`<slug>.audit.md`) - tracked in repo
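The OS-specific locations above can be resolved with logic along these lines. This is a minimal sketch of the convention, not Heidi's code (a library like `platformdirs` may well be used instead):

```python
import os
import sys
from pathlib import Path

def config_dir(app: str = "heidi") -> Path:
    """Return the per-OS global config directory, mirroring the list above."""
    if sys.platform == "win32":
        base = os.environ.get("APPDATA", str(Path.home() / "AppData" / "Roaming"))
        return Path(base) / app.capitalize()
    if sys.platform == "darwin":
        return Path.home() / "Library" / "Application Support" / app.capitalize()
    # Linux and other Unix: honor $XDG_CONFIG_HOME if set
    base = os.environ.get("XDG_CONFIG_HOME", str(Path.home() / ".config"))
    return Path(base) / app

print(config_dir())
```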
```
.
├── heidi_cli/   # CLI tool for agent orchestration
├── client.py    # Python client for agent interaction
├── sdk.py       # GitHub Copilot SDK integration
└── .local/      # Local development files (ignored)
```
- Python 3.10+
- GitHub Copilot subscription (for Copilot features)
Linux/macOS:

```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/heidi-dang/heidi-cli/main/install.sh)"
```

Windows (PowerShell):

```powershell
irm https://raw.githubusercontent.com/heidi-dang/heidi-cli/main/install.ps1 | iex
```

```bash
# Install CLI from repo root
python -m pip install -e '.[dev]'

# Run setup wizard (recommended for first-time users)
heidi setup

# Or initialize with defaults
heidi init

# Authenticate with GitHub
heidi auth gh

# Check Copilot status
heidi copilot status

# Chat with Copilot
heidi copilot chat "hello world"

# Run agent loop
heidi loop "fix failing tests" --executor copilot

# Start HTTP server for OpenWebUI integration
heidi serve

# Check OpenWebUI status
heidi openwebui status

# Get OpenWebUI setup guide
heidi openwebui guide
```

Heidi includes a modern React-based web UI for interacting with agents through a chat interface.
Development Mode (separate ports):

```bash
# Terminal 1: Start the backend API server
heidi serve

# Terminal 2: Start the UI dev server (with hot reload)
heidi start ui
# Or manually: cd ui && npm run dev

# Access UI at http://localhost:3002
# Backend runs at http://localhost:7777
```

Production Mode (single port):

```bash
# Build the UI for production
heidi ui build

# Start the backend server (serves UI at /ui/)
heidi serve

# Access the UI at http://localhost:7777/ui/
```

UI Commands:

```bash
heidi ui build   # Build the UI for production
heidi ui path    # Show UI build path
heidi ui status  # Check UI build status
```

Configuration:

- UI dev server runs on port 3002 (Vite)
- Backend API runs on port 7777 (FastAPI)
- Vite proxy forwards API calls from :3002 → :7777 during development
- Production UI is served at `/ui/` by the backend server
- Set the `HEIDI_UI_DIST` env var to override the UI dist directory
Production Deployment with Custom Domain:

When deploying with a reverse proxy (e.g., Cloudflare Tunnel, Nginx), the Vite dev server needs to trust your domain:

- The `vite.config.ts` already includes `heidiai.com.au` in `allowedHosts`
- For other domains, set the `HEIDI_CORS_ORIGINS` environment variable:

  ```bash
  export HEIDI_CORS_ORIGINS="https://your-domain.com,https://www.your-domain.com"
  heidi serve
  ```

- Or use the `--cors-origins` flag when starting the server
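The comma-separated `HEIDI_CORS_ORIGINS` value presumably ends up as an origins list handed to the FastAPI CORS middleware. A sketch of that parsing step (the helper name is mine, not Heidi's):

```python
import os

def parse_cors_origins(raw: str) -> list[str]:
    """Split a comma-separated origins string, trimming whitespace and blanks."""
    return [origin.strip() for origin in raw.split(",") if origin.strip()]

origins = parse_cors_origins(
    os.environ.get("HEIDI_CORS_ORIGINS",
                   "https://your-domain.com,https://www.your-domain.com")
)
print(origins)
```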
Port Reference:
| Service | Port | Purpose |
|---|---|---|
| Vite Dev Server | 3002 | Development UI with hot reload |
| Heidi Backend | 7777 | API server + production UI |
| Command | Description |
|---|---|
| `heidi setup` | Interactive setup wizard for first-time users |
| `heidi init` | Initialize global config directory |
| `heidi paths` | Show config/state/cache paths |
| `heidi update` | Update UI to latest version |
| `heidi upgrade` | Upgrade Heidi CLI |
| `heidi uninstall` | Uninstall Heidi CLI |
| `heidi auth gh` | Authenticate with GitHub |
| `heidi doctor` | Check all dependencies |
| `heidi copilot status` | Show Copilot connection status |
| `heidi copilot chat <msg>` | Chat with Copilot |
| `heidi run "prompt"` | Single prompt execution |
| `heidi loop "task"` | Full Plan→Audit loop |
| `heidi runs` | List recent runs |
| `heidi config` | Manage configuration |
| `heidi serve` | Start HTTP server (port 7777) |
| `heidi start ui` | Start UI dev server (port 3002) |
| `heidi openwebui status` | Check OpenWebUI connectivity |
| `heidi openwebui guide` | Show OpenWebUI setup guide |
| `heidi openwebui configure` | Configure OpenWebUI settings |
| `heidi connect status` | Show connection status (Ollama, OpenCode) |
| `heidi connect ollama` | Connect to Ollama |
| `heidi connect opencode` | Connect to OpenCode CLI or server |
| `heidi connect disconnect` | Disconnect from a service |
The interactive setup wizard (`heidi setup`) guides you through:
- Environment Check - Verifies Python, Copilot SDK, and optional tools
- Heidi Initialization - Creates global config directory
- GitHub Authentication - Sets up GitHub token for Copilot access
- OpenWebUI Integration - Configures connection to OpenWebUI
- Final Summary - Shows setup status and next steps
Connect to external services like Ollama and OpenCode:
```bash
# Check connection status for all services
heidi connect status
heidi connect status --json

# Connect to Ollama (default: http://127.0.0.1:11434)
heidi connect ollama
heidi connect ollama --url http://localhost:11434 --token <token> --save

# Connect to OpenCode CLI
heidi connect opencode --mode local

# Connect to OpenCode server (default: http://127.0.0.1:4096)
heidi connect opencode --mode server --url http://127.0.0.1:4096 --username <user>

# Disconnect from a service
heidi connect disconnect ollama --yes
heidi connect disconnect opencode --yes
```

| Service | Default URL | Health Endpoint |
|---|---|---|
| Ollama | http://127.0.0.1:11434 | /api/version |
| OpenCode Server | http://127.0.0.1:4096 | /global/health |
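A connection check like `heidi connect status` amounts to a GET against each service's base URL plus its health path. A stdlib-only sketch under that assumption (error handling is deliberately minimal):

```python
import urllib.request
from urllib.parse import urljoin

# Defaults from the table above
SERVICES = {
    "ollama": ("http://127.0.0.1:11434", "/api/version"),
    "opencode": ("http://127.0.0.1:4096", "/global/health"),
}

def health_url(base: str, path: str) -> str:
    """Join a service base URL with its health endpoint path."""
    return urljoin(base, path)

def is_healthy(base: str, path: str, timeout: float = 2.0) -> bool:
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(health_url(base, path), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

print(health_url(*SERVICES["ollama"]))
```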
Heidi CLI includes a built-in HTTP server for OpenWebUI integration:
```bash
# Start the server (foreground)
heidi serve

# Start with UI
heidi serve --ui

# Run in background and return immediately (writes PID to state dir)
heidi serve --detach

# Disable Rich rendering (useful for CI/non-TTY environments)
heidi serve --plain

# Or use environment variable
HEIDI_PLAIN=1 heidi serve
```

| Option | Description |
|---|---|
| `--host` | Host to bind to (default: 127.0.0.1) |
| `--port` | Port to bind to (default: 7777) |
| `--ui` | Also start UI dev server |
| `--detach`, `-d` | Run in background, return immediately (PID file in state dir) |
| `--plain` | Disable Rich rendering |
| `--force`, `-f` | Kill existing server before starting |
```bash
# Find and kill by PID file
PID=$(cat ~/.local/state/heidi/server.pid)
kill $PID

# Or kill all heidi servers
pkill -f "heidi serve"
```

The server writes its PID to:

- Linux: `~/.local/state/heidi/server.pid`
- macOS: `~/Library/Application Support/Heidi/server.pid`
- Windows: `%LOCALAPPDATA%\Heidi\server.pid`
```bash
# Check OpenWebUI status
heidi openwebui status

# Get setup guide
heidi openwebui guide

# Configure OpenWebUI settings
heidi openwebui configure --url http://localhost:3000 --token YOUR_TOKEN
```

The server provides these endpoints for OpenWebUI:

- `GET /health` - Health check
- `GET /agents` - List available agents
- `GET /runs` - List recent runs
- `GET /runs/{id}` - Get run details
- `GET /runs/{id}/stream` - Stream run events (SSE)
- `POST /run` - Execute single prompt
- `POST /loop` - Execute full agent loop
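Since `/runs/{id}/stream` speaks Server-Sent Events, a client mainly needs to collect `data:` lines, with blank lines delimiting events. An illustrative parser (the JSON payload shape shown is an assumption):

```python
def parse_sse(stream_lines):
    """Yield the data payload of each SSE event; blank lines delimit events."""
    data = []
    for line in stream_lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            data.append(line[5:].lstrip())
        elif line == "" and data:
            yield "\n".join(data)
            data = []
    if data:  # flush a trailing event with no final blank line
        yield "\n".join(data)

sample = ['data: {"event": "plan"}', "", 'data: {"event": "audit"}', ""]
print(list(parse_sse(sample)))
```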
Terminal A - Start Backend and UI:

```bash
# From heidi-cli root
cd ui && npm install

# Start both backend + UI (from heidi-cli root)
heidi serve --ui

# Or start separately:
# Terminal A1: heidi serve
# Terminal A2: cd ui && npm run dev -- --port 3000
```

Browser: http://localhost:3000

Test Checklist:

- Health check works (UI shows connected)
- Run mode → POST /run executes
- Loop mode → POST /loop executes
- Streaming shows live updates (or polling fallback works)
- Run list shows recent runs via /runs?limit=10
Before exposing via Cloudflared:

- Enable API Key Auth:

  ```bash
  export HEIDI_API_KEY=your-secret-key
  heidi serve
  ```

- Configure CORS (if needed):

  ```bash
  export HEIDI_CORS_ORIGINS=http://localhost:3000,https://your-tunnel-url
  ```

- UI calls must include:
  - Header: `X-Heidi-Key: your-secret-key`
  - For SSE streaming: `/runs/{id}/stream?key=your-secret-key`
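From client code, the two auth paths above look like this. A sketch only; the header and query parameter names come from the list above, while the base URL and run ID are placeholders:

```python
import os

API_KEY = os.environ.get("HEIDI_API_KEY", "your-secret-key")

# Regular API calls: send the key as a request header
headers = {"X-Heidi-Key": API_KEY}

# SSE streams: browser EventSource cannot set custom headers,
# so the key travels as a query parameter instead
def stream_url(base: str, run_id: str, key: str) -> str:
    return f"{base}/runs/{run_id}/stream?key={key}"

print(headers)
print(stream_url("http://127.0.0.1:7777", "abc123", API_KEY))
```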
```bash
# Install
python -m pip install -e '.[dev]'

# Run tests
pytest -q

# Lint code
ruff check src
```

Linux/macOS (bash):

```bash
bash scripts/smoke_cli.sh
```

Windows (PowerShell):

```powershell
powershell -ExecutionPolicy Bypass -File scripts/smoke_cli.ps1
```

The project includes a landing page hosted on Firebase Hosting at heidi-cli.web.app.
This repository uses git submodules. To clone with all submodules:
```bash
git clone --recurse-submodules https://github.com/heidi-dang/heidi-cli
```

Or if you've already cloned:

```bash
git submodule update --init --recursive
```

The landing page is in `heidi-cli-landing-page/`. To run it locally:

```bash
cd heidi-cli-landing-page
npm install
npm run dev
```

The landing page has automatic CI/CD via GitHub Actions:
- Pull Requests: Deploys to a Firebase preview channel
- Merges to main: Deploys to production live channel
- Create a Firebase project at console.firebase.google.com
- Enable Hosting for your project
- Create a service account:
  - Go to Project Settings → Service Accounts
  - Click "Generate new private key"
  - Copy the JSON content
- Add the following GitHub Secrets:
  - `FIREBASE_SERVICE_ACCOUNT_HEIDI_CLI`: The JSON service account key (entire content)
  - `FIREBASE_PROJECT_ID`: Your Firebase project ID (e.g., `heidi-cli`)
For local development, create a `.env.local` file in `heidi-cli-landing-page/`:

```bash
GEMINI_API_KEY=your-gemini-api-key
```

Note: `.env.local` is gitignored and should never be committed.
MIT