aimux
Multiplex your AI agents. Trace, launch, export. Never leave the terminal.
You're running 5 agents across 3 projects. Claude is refactoring auth. Codex is writing tests. Is the third session idle, or stuck on a permission prompt? You don't know, because each lives in its own terminal.
aimux is your control plane. One terminal. Every agent. Full visibility.
- See everything: all agents, their status, model, cost, and project in one view
- Trace what happened: every prompt, response, and tool call, turn by turn
- Launch from here: spawn Claude, Codex, or Gemini without leaving the terminal
- Annotate, label, and export: mark turns GOOD/BAD/WASTE, export to MLflow for eval datasets
- Bring your own agent: pluggable provider interface, add a new agent in one Go file
```sh
# Homebrew (macOS/Linux)
brew install zanetworker/aimux/aimux

# From source
git clone https://github.com/zanetworker/aimux.git
cd aimux
make install   # builds and copies to /usr/local/bin
```

Then run:

```sh
aimux             # launch the TUI
aimux --version   # check installed version
```

Requires tmux for split-pane session embedding.
Auto-finds running Claude, Codex, and Gemini processes. Shows status, model, tokens, cost, git branch, and permission mode. Refreshes every 2s. Multiple sessions in the same project directory appear as separate entries with #1, #2 suffixes. Sort by name, cost, age, model, or PID with s. Subagents spawned via the Agent tool are detected and tracked with provider-aware identity across all three providers.
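Under the hood this is a process-table scan, not a daemon or hook. A minimal sketch of the idea (the function names here are illustrative, not aimux's actual code; real discovery also extracts PID, project directory, and session identity):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// matchAgents filters ps output down to lines that mention a known
// agent binary. This shows only the core filter step.
func matchAgents(psOutput string, binaries []string) []string {
	var found []string
	for _, line := range strings.Split(psOutput, "\n") {
		for _, bin := range binaries {
			if strings.Contains(line, bin) {
				found = append(found, strings.TrimSpace(line))
				break
			}
		}
	}
	return found
}

func main() {
	// Scan the live process table, as aimux does on every refresh.
	out, err := exec.Command("ps", "-axo", "pid=,command=").Output()
	if err != nil {
		fmt.Println("ps failed:", err)
		return
	}
	for _, a := range matchAgents(string(out), []string{"claude", "codex", "gemini"}) {
		fmt.Println(a)
	}
}
```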
Press `Enter` on any agent to open trace + session side by side: live trace on the left, interactive session on the right. Claude uses direct PTY embedding; Codex and Gemini use a tmux mirror.
Full turn-by-turn view of prompts, responses, and tool calls:
```
17:32  USER  fix the authentication bug in login.go
17:32  ASST  I'll look at the login.go file...
17:32  TOOL  Read /src/auth/login.go
17:32  TOOL  Edit /src/auth/login.go   [BAD] "deleted wrong file"
```
Label turns as GOOD, BAD, or WASTE while watching agents work. Add free-text notes. Annotations persist to disk and export with traces.
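On disk an annotation can be as simple as one JSON record per labeled turn. A sketch of that shape (the struct and field names here are illustrative, not aimux's exact schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Annotation labels one turn of a trace: GOOD/BAD/WASTE plus an
// optional free-text note, keyed by session and turn index.
type Annotation struct {
	SessionID string `json:"session_id"`
	Turn      int    `json:"turn"`
	Label     string `json:"label"` // GOOD, BAD, or WASTE
	Note      string `json:"note,omitempty"`
}

// encode renders the annotation as a single JSON line, ready to be
// appended to a per-session file and merged back in at export time.
func encode(a Annotation) (string, error) {
	b, err := json.Marshal(a)
	return string(b), err
}

func main() {
	line, _ := encode(Annotation{
		SessionID: "abc123",
		Turn:      4,
		Label:     "BAD",
		Note:      "deleted wrong file",
	})
	fmt.Println(line)
}
```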
Press `e` in the trace pane to export:
- `j`: JSONL to `~/.aimux/exports/`
- `o`: OTLP to MLflow, Jaeger, or any OTEL backend
Annotations become MLflow feedback assessments for building eval datasets.
Press `:new` to spawn agents. The launcher walks you through each step:
Pick provider, directory (recent or browse), model, permission mode, runtime (tmux/iTerm), and toggle OTEL tracing. Launches into tmux with telemetry enabled.
Press `S` from the agent list to browse past sessions across all projects. Resume any session in split view, annotate outcomes, tag failure modes, and generate LLM-powered titles.
- Browse: navigate past sessions sorted by recency with prompt preview
- Resume: press `Enter` to reopen a session in split view (trace + live Claude)
- Annotate: mark sessions as achieved/partial/failed/abandoned (`a` key)
- Tag: add failure mode tags with autocomplete (`f` key)
- Filter: fuzzy search by prompt, project, annotation, or tags (`/`)
- Delete: remove sessions permanently (`d` key, with confirmation)
- LLM Titles: auto-generate descriptive titles from session content
CLI access:
```sh
aimux sessions --list                # plain table
aimux sessions --export              # JSONL for eval pipelines
aimux sessions --generate-titles     # generate titles with Gemini Flash
aimux sessions --regenerate-titles   # regenerate all titles
aimux resume <session-id>            # resume directly
```

Press `c` from the agent list for aggregated token usage and estimated USD spend per project.
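The estimate is just token counts times per-model prices, summed per project. A sketch with placeholder prices (the real price tables live in aimux's provider definitions; all names and numbers here are illustrative):

```go
package main

import "fmt"

// usage is one agent's token count for a given model and project.
type usage struct {
	Project string
	Model   string
	Tokens  int
}

// costByProject sums estimated USD spend per project, given a price
// table in dollars per million tokens.
func costByProject(usages []usage, pricePerMTok map[string]float64) map[string]float64 {
	totals := make(map[string]float64)
	for _, u := range usages {
		totals[u.Project] += float64(u.Tokens) / 1e6 * pricePerMTok[u.Model]
	}
	return totals
}

func main() {
	prices := map[string]float64{"opus": 15.0, "flash": 0.3} // placeholder prices
	totals := costByProject([]usage{
		{"auth-service", "opus", 250_000},
		{"auth-service", "flash", 1_000_000},
		{"docs", "flash", 500_000},
	}, prices)
	fmt.Printf("auth-service: $%.2f  docs: $%.2f\n", totals["auth-service"], totals["docs"])
}
```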
Built-in OTLP/HTTP receiver on port 4318 collects live telemetry from spawned agents. Debug anytime: `curl http://localhost:4318/debug`
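The receiver is an ordinary HTTP server speaking OTLP/HTTP. A stripped-down sketch of the shape (handlers and the debug payload are illustrative; the real receiver decodes protobuf spans and feeds the trace view):

```go
package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
)

// debugPayload summarizes receiver state for the /debug endpoint.
// In this sketch it just reports how many OTLP posts were seen.
func debugPayload(received int) string {
	return fmt.Sprintf(`{"status":"ok","spans_received":%d}`, received)
}

func main() {
	received := 0
	mux := http.NewServeMux()
	// Agents POST protobuf-encoded spans to /v1/traces (the standard
	// OTLP/HTTP path); this sketch only counts them.
	mux.HandleFunc("/v1/traces", func(w http.ResponseWriter, r *http.Request) {
		io.Copy(io.Discard, r.Body)
		received++
		w.WriteHeader(http.StatusOK)
	})
	mux.HandleFunc("/debug", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, debugPayload(received))
	})

	// aimux binds :4318; the demo takes any free port so it can run
	// alongside a real receiver, then queries /debug once and exits.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		fmt.Println("listen failed:", err)
		return
	}
	go http.Serve(ln, mux)

	resp, err := http.Get(fmt.Sprintf("http://%s/debug", ln.Addr()))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Println(string(body))
}
```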
Three modes for scaling your agents beyond your laptop:
| Mode | Claude runs | Workers run | How |
|---|---|---|---|
| Local | your machine | your machine | direct subprocess |
| Hybrid | your machine | infra (K8s/EC2) | MCP server dispatches tasks |
| Remote | infra (K8s/EC2) | infra | kubectl exec / SSH attach |
Press `:new` to open the picker, then choose your mode and provider.
Zero-setup K8s: just point at a running cluster. Aimux auto-creates the namespace, auth secrets (from your local ANTHROPIC_API_KEY or GOOGLE_APPLICATION_CREDENTIALS), and deployments on first spawn. No kubectl apply needed.
Auth options: Vertex AI (GCP ADC), Anthropic API key, or both. Env vars from your local shell are forwarded to pods automatically.
Press `T` for the tasks view, `H` or `:health` for the system health dashboard.
See K8s Quickstart for setup.
Press `H` or type `:health` for a unified status dashboard:
```
System Health

Local Providers
  claude   ✓  /opt/homebrew/bin/claude  v2.1.72  3 active
  codex    ✗  not installed
  gemini   ✓  /opt/homebrew/bin/gemini  v1.0.4   0 agents

Infrastructure (k8s)
  Coordination:  ✓ connected
  Compute:       ✓ connected  2 workloads
    - agent-claude-session
    - agent-claude-task
```
View Claude Code team configurations and members via the `:teams` command.
| Key | Where | Action |
|---|---|---|
| `j`/`k` | Everywhere | Navigate up/down |
| `Enter` | Agent list | Split view (trace + session) |
| `t` | Agent list | Standalone trace view |
| `c` | Agent list | Cost dashboard |
| `S` | Agent list | Session history browser |
| `T` | Agent list | Tasks view |
| `H` | Agent list | System health dashboard |
| `Tab` | Split view | Switch focus between panes |
| `e` | Trace pane | Export menu (j: JSONL, o: OTEL) |
| `a` | Trace pane | Annotate turn (GOOD/BAD/WASTE) |
| `N` | Trace pane | Add note to annotated turn |
| `Ctrl+f` | Split view | Toggle fullscreen on focused pane |
| `Tab` | Agent list | Expand/collapse process tree (for grouped sessions) |
| `s` | Agent list | Cycle sort: Name/Cost/Age/Model/PID |
| `x` | Agent list | Kill agent |
| `:new` | Anywhere | Launch new agent |
| `Esc` | Split/trace | Exit to agent list |
| `?` | Anywhere | Help |
All settings live in `~/.aimux/config.yaml`, and every key is optional:

```yaml
providers:
  claude:
    enabled: true
  codex:
    enabled: true
  gemini:
    enabled: false

shell: /bin/zsh

# OTEL receiver: collects live telemetry from spawned agents
otel:
  enabled: true
  port: 4318

# OTLP export: where to send traces (e → o in trace pane)
export:
  endpoint: "localhost:5001"
  insecure: true
  mlflow:
    experiment_id: "1"  # required by MLflow

# Infrastructure (K8s) — optional, zero-setup
# Just enable and point at a cluster. Aimux auto-creates namespace,
# secrets, and deployments on first spawn.
kubernetes:
  enabled: true
  kubeconfig: ""  # empty = default kubeconfig
  namespace: "agents"
  redis_url: "redis://:password@<elb>:6379"  # optional: for coordination
  team_id: "my-team"

# Session history: LLM-powered title generation
# Requires GEMINI_API_KEY (for flash) or ANTHROPIC_API_KEY (for haiku/sonnet/opus)
sessions:
  auto_title: true
  title_model: "flash"  # flash, haiku, sonnet, opus
```

MLflow setup:
```sh
# Start MLflow
mlflow server --host 127.0.0.1 --port 5001

# Create an experiment
curl -X POST http://localhost:5001/api/2.0/mlflow/experiments/create \
  -H "Content-Type: application/json" \
  -d '{"name": "agent-evals"}'
# Returns {"experiment_id": "1"} — put in config above
```

In aimux: `Tab` to the trace pane, `a` to annotate, `e` then `o` to export.
| Provider | Discovery | Trace | Session | OTEL |
|---|---|---|---|---|
| Claude | Process scan + JSONL | Full conversations | Direct PTY embed | Logs via http/protobuf |
| Codex | Process scan + JSONL | Full conversations | Tmux mirror | Traces + logs |
| Gemini | Process scan + JSON | Full conversations (per-session chat files) | Tmux mirror | Traces + logs |
| K8s | Redis heartbeat | OTel Collector | kubectl exec + tmux | Remote collector |
Adding a new provider
Implement the Provider interface (11 methods), register in app.go, add pricing. For infra providers, implement InfraProvider which adds health checks, session spawning, and task management:
```go
type Provider interface {
	Name() string
	Discover() ([]agent.Agent, error)
	ResumeCommand(a agent.Agent) *exec.Cmd
	CanEmbed() bool
	FindSessionFile(a agent.Agent) string
	RecentDirs(max int) []RecentDir
	SpawnCommand(dir, model, mode string) *exec.Cmd
	SpawnArgs() SpawnArgs
	ParseTrace(filePath string) ([]trace.Turn, error)
	OTELEnv(endpoint string) string
}
```

See Adding a Provider for the full walkthrough.
No daemon, no hooks, no modifications to your AI tools. Reads from the filesystem:
| Source | Location | Data |
|---|---|---|
| Config | `~/.aimux/config.yaml` | Provider settings, export config |
| Process table | `ps aux` | Running local agents |
| Session logs | `~/.claude/projects/*/`, `~/.codex/sessions/`, `~/.gemini/tmp/*/chats/` | Conversations, tool calls |
| OTEL receiver | `localhost:4318` | Live telemetry from local agents |
| Redis | `redis://<endpoint>:6379` | K8s agent heartbeats, tasks, costs |
| K8s API | via kubeconfig | Remote agent deployments, pod status |
| OTel Collector | `<endpoint>:4317` | Traces from K8s agents |
| Teams | `~/.claude/teams/*/config.json` | Team membership |
Releases are fully automated via CI. To cut a new release:
```sh
git tag v0.5.0
git push origin v0.5.0
```

This triggers the Release workflow, which:
- Runs the full test suite (build, vet, test)
- Cross-compiles binaries for darwin/linux (amd64/arm64) via GoReleaser
- Creates a GitHub release with changelog and binaries
- Updates the Homebrew tap formula
Users then upgrade with `brew upgrade zanetworker/aimux/aimux`.
Do not run `goreleaser` locally — let CI handle it to avoid duplicate asset conflicts.
Bubble Tea | Lip Gloss | charmbracelet/x/vt | creack/pty | OpenTelemetry







