Your AI agent's command center — chat, files, memory, skills, and terminal in one place.
Not a chat wrapper. A complete workspace — orchestrate agents, browse memory, manage skills, and control everything from one interface.
v2 — zero-fork. Clone, don't fork. Runs on vanilla `NousResearch/hermes-agent` installed via Nous's own installer. No patches, no drift.
- 🤖 Hermes Agent Integration — Direct gateway connection with real-time SSE streaming
- 🎨 8-Theme System — Official, Classic, Slate, Mono — each with light and dark variants
- 🔒 Security Hardened — Auth middleware on all API routes, CSP headers, exec approval prompts
- 📱 Mobile-First PWA — Full feature parity on any device via Tailscale
- ⚡ Live SSE Streaming — Real-time agent output with tool call rendering
- 🧠 Memory & Skills — Browse, search, and edit agent memory; explore 2,000+ skills
| Chat | Conductor |
|---|---|
| ![]() | ![]() |

| Dashboard | Memory |
|---|---|
| ![]() | ![]() |

| Terminal | Settings |
|---|---|
| ![]() | ![]() |

| Tasks | Jobs |
|---|---|
| ![]() | ![]() |
```bash
curl -fsSL https://raw.githubusercontent.com/outsourc-e/hermes-workspace/main/install.sh | bash
```

This installs hermes-agent from PyPI, clones this repo, sets up `.env`, and installs deps. Then:

```bash
hermes gateway run                  # terminal 1
cd ~/hermes-workspace && pnpm dev   # terminal 2
```

Open http://localhost:3000. That's it.
If you already have hermes-agent installed (via Nous's installer, pip install, systemd, Docker, etc.) and it's serving the gateway at http://<host>:8642, you don't need to reinstall anything — just point the workspace at it.
```bash
git clone https://github.com/outsourc-e/hermes-workspace.git
cd hermes-workspace
pnpm install
cp .env.example .env

# Point at your existing Hermes services.
echo 'HERMES_API_URL=http://127.0.0.1:8642' >> .env

# Zero-fork installs also need the separate dashboard API for config/sessions/skills/jobs.
echo 'HERMES_DASHBOARD_URL=http://127.0.0.1:9119' >> .env

# If your gateway was started with API_SERVER_KEY (auth enabled), set the same value:
# echo 'HERMES_API_TOKEN=***' >> .env

pnpm dev   # http://localhost:3000 (override with PORT=4000 pnpm dev)
```

Requirements on the agent side:
- Gateway bound to an address the workspace can reach (typically `API_SERVER_HOST=0.0.0.0` + the port exposed).
- `API_SERVER_ENABLED=true` in `~/.hermes/.env` (or the agent's env) so the gateway serves core APIs on `:8642`.
- `hermes dashboard` running (default `http://127.0.0.1:9119`) for zero-fork installs. The dashboard provides config, sessions, skills, and jobs APIs.
- If `API_SERVER_KEY` is set, the workspace must pass the same value via `HERMES_API_TOKEN` — otherwise leave both unset.
Verify both services before opening the workspace:
- `curl http://127.0.0.1:8642/health` should return ok.
- `curl http://127.0.0.1:9119/api/status` should return dashboard metadata.
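These two probes can be wrapped in a small pre-flight script (a sketch — adjust hosts and ports to your setup):

```bash
# Pre-flight check for the gateway + dashboard pair.
# probe <url> -> prints "ok" if the endpoint answers within 2s, "down" otherwise.
probe() {
  if curl -fsS --max-time 2 "$1" >/dev/null 2>&1; then
    echo "ok"
  else
    echo "down"
  fi
}

echo "gateway:   $(probe http://127.0.0.1:8642/health)"
echo "dashboard: $(probe http://127.0.0.1:9119/api/status)"
```

If either line prints `down`, fix that service before launching the workspace.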
Then start the workspace and complete onboarding — it should detect the gateway + dashboard pair and unlock the enhanced panes automatically.
If the workspace and its browser live on different machines — e.g. the workspace runs on a Pi/Mac/home server and you access it from your phone over Tailscale — point HERMES_API_URL at the reachable backend address, not 127.0.0.1:
```bash
# On the server running the workspace + gateway:
echo 'HERMES_API_URL=http://100.x.y.z:8642' >> .env
echo 'HERMES_DASHBOARD_URL=http://100.x.y.z:9119' >> .env

# Also tell the gateway to listen on all interfaces so Tailscale peers can reach it.
# In ~/.hermes/.env (or wherever the gateway reads config):
echo 'API_SERVER_HOST=0.0.0.0' >> ~/.hermes/.env
```

Then restart the gateway, dashboard, and workspace. Hit the workspace from the remote device and the connection probe will use the Tailscale IP instead of localhost. Both `HERMES_API_URL` and `HERMES_DASHBOARD_URL` must be set to Tailscale/LAN-reachable URLs — setting only one will leave the other probing 127.0.0.1 and failing.
If you've already started the workspace, you can update both URLs from Settings → Connection without restarting. The values are persisted to ~/.hermes/workspace-overrides.json and take effect immediately (gateway capabilities are reprobed on save). Editing .env still works for pre-start config and for CI/containers.
Hermes Workspace works with any OpenAI-compatible backend. If your backend also exposes Hermes gateway APIs, enhanced features like sessions, memory, skills, and jobs unlock automatically.
- Node.js 22+ — nodejs.org
- An OpenAI-compatible backend — local, self-hosted, or remote
- Optional: Python 3.11+ if you want to run a Hermes gateway locally
Point Hermes Workspace at any backend that supports:
- `POST /v1/chat/completions`
- `GET /v1/models` (recommended)
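For reference, exercising those two endpoints by hand might look like this (a sketch — it assumes a backend at `127.0.0.1:8642` and uses a placeholder model name; substitute your own):

```bash
BASE_URL="${HERMES_API_URL:-http://127.0.0.1:8642}"

# List available models (prints a fallback message if the backend is down):
curl -fsS --max-time 3 "$BASE_URL/v1/models" || echo "backend unreachable"

# Minimal chat-completion request body an OpenAI-compatible backend expects:
payload='{"model":"hermes","messages":[{"role":"user","content":"hello"}]}'
curl -fsS --max-time 3 "$BASE_URL/v1/chat/completions" \
  -H 'Content-Type: application/json' \
  -d "$payload" || echo "backend unreachable"
```

Any server that answers both calls with valid JSON will work as a workspace backend.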
Example Hermes gateway setup (from scratch):
```bash
# Install hermes-agent via Nous's official installer
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash

# Configure a provider + start the gateway
hermes setup
hermes gateway run
```

Our one-liner installer does both steps automatically. If you're using another OpenAI-compatible server, just note its base URL.
```bash
# In a new terminal
git clone https://github.com/outsourc-e/hermes-workspace.git
cd hermes-workspace
pnpm install
cp .env.example .env
printf '\nHERMES_API_URL=http://127.0.0.1:8642\n' >> .env
pnpm dev   # Starts on http://localhost:3000
```

Verify: Open http://localhost:3000 and complete the onboarding flow. First connect the backend, then verify chat works. If your gateway exposes Hermes APIs, advanced features appear automatically.
When Hermes Workspace is running behind Agent W's local HTTPS proxy, the managed companion entrypoint is:
```
https://localhost:4445/chat/new
```

For local validation from the workspace checkout:
```bash
pnpm exec tsc --noEmit
pnpm test
pnpm build
pnpm smoke:managed
```

`pnpm smoke:managed` checks the managed 4445 surface and fails if the recent PM2 error log still contains the missing-asset/runtime signatures that show up when dist drifts under a live server process.
```bash
# OpenAI-compatible backend URL
HERMES_API_URL=http://127.0.0.1:8642

# Optional: provider keys the Hermes gateway can read at runtime.
# You only need the key(s) for whichever provider(s) you actually use.
# ANTHROPIC_API_KEY=sk-ant-...      # Claude
# OPENAI_API_KEY=sk-...             # GPT / o-series
# OPENROUTER_API_KEY=sk-or-v1-...   # OpenRouter (incl. free models)
# GOOGLE_API_KEY=AIza...            # Gemini
# (Ollama / LM Studio / local servers don't need a key)

# Optional: password-protect the web UI
# HERMES_PASSWORD=your_password
```

Hermes Workspace supports two modes with local models:
Point the workspace directly at your local server — no Hermes gateway needed.
```bash
# Start workspace pointed at Atomic Chat
HERMES_API_URL=http://127.0.0.1:1337/v1 pnpm dev
```

Download Atomic Chat, launch the desktop app, and make sure a model is loaded before starting Hermes Workspace.
```bash
# Start Ollama
OLLAMA_ORIGINS=* ollama serve

# Start workspace pointed at Ollama
HERMES_API_URL=http://127.0.0.1:11434 pnpm dev
```

Chat works immediately. Sessions, memory, and skills show "Not Available" — that's expected in portable mode.
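You can confirm the local server actually exposes the OpenAI-compatible surface before starting the workspace (a sketch, assuming Ollama's default port):

```bash
# Returns the JSON body of /v1/models, or "down" if the server isn't reachable.
check_models() {
  curl -fsS --max-time 2 "$1/v1/models" 2>/dev/null || echo "down"
}

check_models http://127.0.0.1:11434   # Ollama's default port
```

A reachable server prints a JSON model list; `down` means `ollama serve` (or your local runner) isn't listening yet.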
Route through the Hermes gateway for sessions, memory, skills, jobs, and tools.
Here are two explicit ~/.hermes/config.yaml examples for the local providers we support directly in the workspace:
Atomic Chat:

```yaml
provider: atomic-chat
model: your-model-name
custom_providers:
  - name: atomic-chat
    base_url: http://127.0.0.1:1337/v1
    api_key: atomic-chat
    api_mode: chat_completions
```

Ollama:

```yaml
provider: ollama
model: qwen3:32b
custom_providers:
  - name: ollama
    base_url: http://127.0.0.1:11434/v1
    api_key: ollama
    api_mode: chat_completions
```

You can adapt the same shape for other OpenAI-compatible local runners, but Atomic Chat and Ollama are the two built-in local paths documented in the workspace UI.
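As an illustrative sketch of that adaptation (not a workspace-documented path — the provider name is arbitrary, `1234` is LM Studio's default server port, and the `model` value is a placeholder):

```yaml
provider: lm-studio
model: your-model-name
custom_providers:
  - name: lm-studio
    base_url: http://127.0.0.1:1234/v1   # LM Studio's default local server port
    api_key: lm-studio                   # local servers usually accept any key
    api_mode: chat_completions
```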
2. Enable the API server in `~/.hermes/.env`:

```bash
API_SERVER_ENABLED=true
```

3. Start the gateway, dashboard, and workspace:

```bash
hermes gateway run   # Starts core APIs on :8642
hermes dashboard     # Starts dashboard APIs on :9119

HERMES_API_URL=http://127.0.0.1:8642 \
HERMES_DASHBOARD_URL=http://127.0.0.1:9119 \
pnpm dev
```

For authenticated gateways, also set `HERMES_API_TOKEN` in the workspace environment to the same value as `API_SERVER_KEY`.
All workspace features unlock automatically once both services are reachable — sessions persist, memory saves across chats, skills are available, and the dashboard shows real usage data.
Works with any OpenAI-compatible server — Atomic Chat, Ollama, LM Studio, vLLM, llama.cpp, LocalAI, etc. Just change the `base_url` and `model` in the config above.
The Docker setup runs both the Hermes Agent gateway and Hermes Workspace together.
- Docker
- Docker Compose
- At least one LLM provider API key (Anthropic by default) — required for the agent gateway
```bash
git clone https://github.com/outsourc-e/hermes-workspace.git
cd hermes-workspace
cp .env.example .env
```

Edit `.env` and add at least one LLM provider key — whichever provider you want hermes-agent to use:
```bash
# Pick one (or more). You do NOT need all of these.
ANTHROPIC_API_KEY=sk-ant-...        # Claude
# OPENAI_API_KEY=sk-...             # GPT / o-series
# OPENROUTER_API_KEY=sk-or-v1-...   # OpenRouter (free models available)
# GOOGLE_API_KEY=AIza...            # Gemini
```

Using Ollama, LM Studio, or another local server? No key needed — just point hermes-agent at your local endpoint via the onboarding flow.
> **Heads up:** `hermes-agent` needs to be able to reach some model. If you don't configure any provider (API key or local server), chat will fail on the first message.
```bash
docker compose up
```

This pulls two pre-built images and starts them:

- hermes-agent → `nousresearch/hermes-agent:latest` on port 8642
- hermes-workspace → `ghcr.io/outsourc-e/hermes-workspace:latest` on port 3000
No local build. First run takes a minute to pull; subsequent starts are instant.
Agent state (config, sessions, skills, memory, credentials) persists in the
hermes-data named volume, so containers can be recreated without data loss.
Open http://localhost:3000 and complete the onboarding.
Verify: Check the Docker logs for `[gateway] Connected to Hermes` — this confirms the workspace successfully connected to the agent.
Want to hack on the workspace or the bundled agent Dockerfile? Use the dev overlay:
```bash
docker compose -f docker-compose.yml -f docker-compose.dev.yml up --build
```

The base docker-compose.yml stays untouched — the overlay adds `build:` blocks that take priority over `image:`, so both services compile from local source.
Deploying Hermes Workspace to a PaaS or home-lab stack? Pull the image directly from GitHub Container Registry:

```
ghcr.io/outsourc-e/hermes-workspace:latest
```
Available tags:
| Tag | What it is |
|---|---|
| `latest` | Latest main commit (stable; recommended) |
| `v2.0.0` | Pinned semver tag |
| `main-<sha>` | Specific commit |
Minimal Coolify / Easypanel config:
```yaml
service: hermes-workspace
image: ghcr.io/outsourc-e/hermes-workspace:latest
port: 3000
env:
  HERMES_API_URL: http://hermes-agent:8642   # point at your gateway
  HERMES_API_TOKEN: ${API_SERVER_KEY}        # if gateway auth is enabled
```

The image is built for linux/amd64 and linux/arm64. Pair it with either a `nousresearch/hermes-agent:latest` container (what our docker-compose.yml does by default) or an existing gateway on another host.
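If you prefer plain Compose over a PaaS config, an equivalent minimal service definition might look like this (a sketch — the service name is illustrative, and the token line is only needed when gateway auth is enabled):

```yaml
services:
  hermes-workspace:
    image: ghcr.io/outsourc-e/hermes-workspace:latest
    ports:
      - "3000:3000"
    environment:
      HERMES_API_URL: http://hermes-agent:8642   # or an existing gateway on another host
      # HERMES_API_TOKEN: ${API_SERVER_KEY}      # if gateway auth is enabled
```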
Hermes Workspace is a Progressive Web App (PWA) — install it for a full native-app experience: no browser chrome, plus keyboard shortcuts and offline support.
- Open Hermes Workspace in Chrome or Edge at `http://localhost:3000`
- Click the install icon (⊕) in the address bar
- Click Install — Hermes Workspace opens as a standalone desktop app
- Pin to Dock / Taskbar for quick access
macOS users: After installing, you can also add it to your Launchpad.
- Open Hermes Workspace in Safari on your iPhone
- Tap the Share button (□↑)
- Scroll down and tap "Add to Home Screen"
- Tap Add — the Hermes Workspace icon appears on your home screen
- Launch from home screen for the full native app experience
- Open Hermes Workspace in Chrome on your Android device
- Tap the three-dot menu (⋮) → "Add to Home screen"
- Tap Add — Hermes Workspace is now a native-feeling app on your device
Access Hermes Workspace from anywhere on your devices — no port forwarding, no VPN complexity.
- Install Tailscale on your Mac and mobile device:
  - Mac: tailscale.com/download
  - iPhone/Android: Search "Tailscale" in the App Store / Play Store
- Sign in to the same Tailscale account on both devices
- Find your Mac's Tailscale IP:

  ```bash
  tailscale ip -4   # Example output: 100.x.x.x
  ```

- Open Hermes Workspace on your phone: `http://100.x.x.x:3000`
- Add to Home Screen using the steps above for the full app experience
💡 Tailscale works over any network — home wifi, mobile data, even across countries. Your traffic stays end-to-end encrypted.
Status: In Development — A native Electron-based desktop app is in active development.
The desktop app will offer:
- Native window management and tray icon
- System notifications for agent events and mission completions
- Auto-launch on startup
- Deep OS integration (macOS menu bar, Windows taskbar)
In the meantime: Install Hermes Workspace as a PWA (see above) for a near-native desktop experience — it works great.
Status: Coming Soon
A fully managed cloud version of Hermes Workspace is in development:
- One-click deploy — No self-hosting required
- Multi-device sync — Access your agents from any device
- Team collaboration — Shared mission control for your whole team
- Automatic updates — Always on the latest version
Features pending cloud infrastructure:
- Cross-device session sync
- Team shared memory and workspaces
- Cloud-hosted backend with managed uptime
- Webhook integrations and external triggers
- Real-time SSE streaming with tool call rendering
- Agent-authored artifact events surfaced in the inspector
- Multi-session management with full history
- Markdown + syntax highlighting
- Chronological message ordering with merge dedup
- Inspector panel for session activity, memory, and skills
- Browse and edit agent memory files
- Search across memory entries
- Markdown preview with live editing
- Browse 2,000+ skills from the registry
- View skill details, categories, and documentation
- Skill management per session
- Full workspace file browser
- Navigate directories, preview and edit files
- Monaco editor integration
- Full PTY terminal with cross-platform support
- Persistent shell sessions
- Direct workspace access
- 8 themes: Official, Classic, Slate, Mono — each with light and dark variants
- Theme persists across sessions
- Full mobile dark mode support
- Auth middleware on all API routes
- CSP headers via meta tags
- Path traversal prevention on file/memory routes (real-path boundary check, not string prefix)
- Rate limiting on endpoints
- Fail-closed startup guard: refuses to bind non-loopback without `HERMES_PASSWORD`
- Session cookies: `HttpOnly` + `SameSite=Strict` + `Secure` (in production)
- Optional password protection for web UI
Key env vars for remote / Docker deployments:
- `HERMES_PASSWORD` — required whenever `HOST ≠ 127.0.0.1`
- `COOKIE_SECURE=1` — force the `Secure` cookie flag when terminating HTTPS at a proxy
- `TRUST_PROXY=1` — trust `x-forwarded-for` / `x-real-ip` (only set behind a sanitizing reverse proxy)
- `HERMES_DASHBOARD_TOKEN` — explicit bearer for the dashboard API (preferred over the legacy HTML-scrape fallback)
- `HERMES_ALLOW_INSECURE_REMOTE=1` — bypass the fail-closed guard (not recommended)
See .env.example for the full list. Credits to @kiosvantra for the security audit surfacing #121–#125.
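Put together, a hardened remote deployment's workspace env might look like this (a sketch with placeholder values — pick only the flags your topology needs):

```bash
# Workspace .env for a proxy-terminated HTTPS deployment (illustrative values).
HERMES_API_URL=http://100.x.y.z:8642
HERMES_DASHBOARD_URL=http://100.x.y.z:9119
HERMES_PASSWORD=change-me            # required when binding non-loopback
COOKIE_SECURE=1                      # HTTPS is terminated at the proxy
TRUST_PROXY=1                        # only behind a sanitizing reverse proxy
# HERMES_ALLOW_INSECURE_REMOTE=1     # bypasses the fail-closed guard; avoid
```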
The workspace auto-detects your gateway's capabilities on startup. Check your terminal for a line like:
```
[gateway] http://127.0.0.1:8642 available: health, models; missing: sessions, skills, memory, config, jobs
[gateway] Missing Hermes APIs detected. Update hermes-agent to the latest version.
```
Fix: Upgrade to the latest stock hermes-agent, which ships the extended endpoints:
```bash
cd ~/hermes-agent && git pull && uv pip install -e .
hermes gateway run
```

(If you installed via a different path, follow your Nous installer's upgrade instructions.) If you were on the old outsourc-e/hermes-agent fork, it's no longer needed as of v2 — uninstall it and use upstream instead.
Your Hermes gateway isn't running. Start it:
```bash
hermes gateway run
```

First-time run? Do `hermes setup` first to pick a provider and model.
Make sure your ~/.hermes/config.yaml has the custom_providers section and API_SERVER_ENABLED=true in ~/.hermes/.env. See Local Models above.
Also ensure Ollama is running with CORS enabled:
```bash
OLLAMA_ORIGINS=* ollama serve
```

Use `http://127.0.0.1:11434/v1` (not localhost) as the base URL.
Verify: curl http://localhost:8642/health should return {"status": "ok"}.
v2+ runs on vanilla hermes-agent with full feature parity. The upstream ships all extended endpoints (sessions, memory, skills, config). No fork required, ever.
If you're pinned to an older hermes-agent version and missing endpoints, the workspace will degrade gracefully to portable mode with basic chat — upgrade upstream to restore full features.
If using Docker Compose and getting auth errors:
- Check at least one provider key is set:

  ```bash
  grep -E '_API_KEY' .env
  # Should show one of: ANTHROPIC_API_KEY, OPENAI_API_KEY, OPENROUTER_API_KEY, GOOGLE_API_KEY, ...
  ```

  (hermes-agent reads whichever key matches the provider configured in `~/.hermes/config.yaml`.)

- View the agent container logs:

  ```bash
  docker compose logs hermes-agent
  ```

  Look for startup errors or missing API key warnings.

- Verify the agent health endpoint:

  ```bash
  curl http://localhost:8642/health   # Should return: {"status": "ok"}
  ```

- Restart with fresh containers:

  ```bash
  docker compose down
  docker compose up --build
  ```

- Check workspace logs for gateway status:

  ```bash
  docker compose logs hermes-workspace
  ```

  Look for `[gateway] http://hermes-agent:8642 mode=...` — if it shows `mode=disconnected`, the agent isn't running correctly.
The `hermes webapi` command referenced in older docs doesn't exist. The correct command is:

```bash
hermes --gateway   # Starts the FastAPI gateway server
```

The Docker setup uses `hermes --gateway` automatically — no action needed if using `docker compose up`.
| Feature | Status |
|---|---|
| Chat + SSE Streaming | ✅ Shipped |
| Files + Terminal | ✅ Shipped |
| Memory Browser | ✅ Shipped |
| Skills Browser | ✅ Shipped |
| Mobile PWA + Tailscale | ✅ Shipped |
| 8-Theme System | ✅ Shipped |
| Native Desktop App (Electron) | 🔨 In Development |
| Model Switching & Config | 🔨 In Development |
| Chat Abort / Cancel | 🔨 In Development |
| Cloud / Hosted Version | 🔜 Coming Soon |
| Team Collaboration | 🔜 Coming Soon |
Hermes Workspace is free and open source. If it's saving you time and powering your workflow, consider supporting development:
ETH: 0xB332D4C60f6FBd94913e3Fd40d77e3FE901FAe22
Every contribution helps keep this project moving. Thank you 🙏
PRs are welcome! See CONTRIBUTING.md for guidelines.
- Bug fixes → open a PR directly
- New features → open an issue first to discuss
- Security issues → see SECURITY.md for responsible disclosure
MIT — see LICENSE for details.








