# TailFlow

Zero-configuration, high-speed log aggregator for local full-stack development — built in Rust.
TailFlow unifies logs from Docker containers, spawned processes, and log files into a single real-time stream. View them in a color-coded terminal UI or a browser dashboard. No Rust toolchain required — install via npx.
```bash
npx tailflow --docker
```

- The Problem
- What TailFlow Solves
- Features
- Installation
- Usage
- TUI Keybindings
- Web Dashboard
- HTTP Daemon & SSE API
- Configuration Reference
- Architecture
- Project Layout
- Roadmap
- Contributing
- License
## The Problem

Modern local development stacks are fragmented. A typical session looks like this:

```
Tab 1: docker compose up
Tab 2: npm run dev
Tab 3: go run ./cmd/api
Tab 4: tail -f logs/worker.log
```
When something breaks, you're jumping between four windows trying to correlate a timestamp in one tab with an error in another. The cognitive load compounds with every service you add.
Existing tools solve parts of this:
| Tool | Gap |
|---|---|
| `docker compose logs -f` | Docker containers only — no processes or files |
| Dozzle | Docker-only web UI — can't ingest spawned processes |
| Logdy | One stdin stream at a time — no Docker or multi-source |
| mprocs | Multi-process runner — not log-focused, no filtering |
| lnav | Powerful log file viewer — no Docker or process spawning |
None of them unify all three source types — Docker containers, spawned processes, and log files — in a single filterable, color-coded view with both a TUI and a web UI. That gap is what TailFlow fills.
## What TailFlow Solves

| Problem | Solution |
|---|---|
| Logs scattered across terminal tabs | Single multiplexed TUI or browser dashboard |
| Hard to correlate events across microservices | All sources share one timestamped stream |
| Docker-only or file-only log tooling | Docker + processes + files in one tool |
| Heavy agents (Datadog, Elastic) for local dev | Rust binary, < 50 MB RAM, no daemon required |
| Per-project tool configuration | tailflow.toml at your monorepo root |
| Terminal-only access to local logs | tailflow-daemon SSE endpoint at localhost:7878 |
## Features

- Unified log ingestion — Docker containers (via socket), spawned child processes (`sh -c`), tailed log files, and piped stdin
- Zero-config startup — drop a `tailflow.toml` at your repo root and run `tailflow` from anywhere inside it
- Real-time regex filtering — filter by keyword, source name, or regex in both the TUI and web dashboard
- Color-coded sources — each service gets a distinct color; palette is consistent between the TUI and web UI
- Sub-10ms latency — Tokio async runtime with a broadcast channel; zero polling
- Embedded web dashboard — `tailflow-daemon` serves a Preact UI at `localhost:7878`, no separate install
- npx-ready — `npx tailflow` works on macOS, Linux, and Windows without installing Rust
- Dual binaries — `tailflow` (interactive TUI) and `tailflow-daemon` (headless HTTP + web UI)
## Installation

The fastest way to get started. Works on macOS (ARM64 + x64), Linux (x64 + ARM64), and Windows x64.
```bash
# One-off run — no install needed
npx tailflow --docker
npx tailflow-daemon --docker

# Global install
npm install -g tailflow
tailflow --docker
tailflow-daemon --port 7878
```

npm installs only the binary matching your OS and CPU via platform-specific optional dependencies — the same distribution pattern used by esbuild and Biome.
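That pattern amounts to one `optionalDependencies` entry per platform package, each published with `os`/`cpu` fields in its own manifest. A rough sketch of what the wrapper's `package.json` might look like (version numbers and exact fields are illustrative, not copied from the published package):

```json
{
  "name": "tailflow",
  "version": "0.0.0",
  "bin": { "tailflow": "bin/run.js" },
  "optionalDependencies": {
    "@tailflow/darwin-arm64": "0.0.0",
    "@tailflow/darwin-x64": "0.0.0",
    "@tailflow/linux-x64": "0.0.0",
    "@tailflow/linux-arm64": "0.0.0",
    "@tailflow/win32-x64": "0.0.0"
  }
}
```

npm skips optional dependencies whose `os`/`cpu` constraints don't match the host, so a single `npm install -g tailflow` pulls exactly one prebuilt binary, and `bin/run.js` just locates and spawns it.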
Via Homebrew:

```bash
brew tap thinkgrid-labs/tap
brew install tailflow
```

Or build from source:

```bash
git clone https://github.com/thinkgrid-labs/tailflow.git
cd tailflow
cargo install --path crates/tailflow-tui
cargo install --path crates/tailflow-daemon
```

Verify the install:

```bash
tailflow --help
tailflow-daemon --help
```

## Usage

```bash
# Stream all running Docker containers
tailflow --docker

# Mix Docker containers with a tailed file
tailflow --docker --file logs/app.log
```

Pipe any process straight in:

```bash
npm run dev | tailflow
go run ./cmd/api | tailflow
python manage.py runserver | tailflow
```

Create `tailflow.toml` at your project root to define your full local stack:
```toml
[sources]
docker = true

[[sources.process]]
label = "frontend"
cmd = "npm run dev --prefix packages/web"

[[sources.process]]
label = "api"
cmd = "go run ./cmd/api"

[[sources.file]]
path = "logs/worker.log"
label = "worker"
```

Then from anywhere inside the repo:
```bash
tailflow          # TUI mode
tailflow-daemon   # browser mode → open http://localhost:7878
```

TailFlow auto-discovers `tailflow.toml` by walking up from the current directory.
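The walk-up discovery can be sketched in a few lines of std-only Rust. This is an illustrative sketch, not TailFlow's actual implementation (the real parser lives in `crates/tailflow-core/src/config.rs`):

```rust
use std::path::{Path, PathBuf};

/// Walk from `start` up toward the filesystem root, returning the first
/// `tailflow.toml` found along the way.
fn find_config(start: &Path) -> Option<PathBuf> {
    let mut dir = start;
    loop {
        let candidate = dir.join("tailflow.toml");
        if candidate.is_file() {
            return Some(candidate);
        }
        dir = dir.parent()?; // reached the root without a hit: give up
    }
}

fn main() {
    match std::env::current_dir().ok().and_then(|d| find_config(&d)) {
        Some(path) => println!("config: {}", path.display()),
        None => println!("no tailflow.toml found; falling back to CLI flags"),
    }
}
```

Because the search stops at the first match, a nested package with its own `tailflow.toml` shadows the monorepo root config.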
## TUI Keybindings

| Key | Action |
|---|---|
| `/` | Enter filter mode |
| `Enter` | Apply filter and return to stream |
| `Esc` | Exit filter mode |
| `j` / `↓` | Scroll down |
| `k` / `↑` | Scroll up |
| `G` | Jump to latest log line |
| `q` / `Ctrl-C` | Quit |
The filter bar accepts plain text substrings or full regex patterns, matching against both the log payload and the source name:

```
# Show only logs from the "api" source
api

# Show error-level lines (case-insensitive)
(?i)error

# Match lines containing a specific request ID
req-[a-f0-9]{8}

# Show output from multiple sources
frontend|api
```
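Conceptually, a record passes the filter when the pattern hits either the payload or the source name. A minimal std-only sketch; the real TUI compiles the pattern as a regex, while this toy version substitutes plain substring matching:

```rust
/// Simplified filter check: show a record when the pattern appears in either
/// the payload or the source name. TailFlow compiles the pattern as a regex;
/// this sketch uses substring matching to stay dependency-free.
fn matches_filter(source: &str, payload: &str, pattern: &str) -> bool {
    pattern.is_empty() || source.contains(pattern) || payload.contains(pattern)
}

fn main() {
    let records = [
        ("api", "GET /users 200 in 4ms"),
        ("frontend", "compiled successfully"),
        ("worker", "queue backlog at 10k jobs"),
    ];
    // Only the record whose source (or payload) contains "api" survives.
    for (source, payload) in records {
        if matches_filter(source, payload, "api") {
            println!("[{source}] {payload}");
        }
    }
}
```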
## Web Dashboard

`tailflow-daemon` embeds a full Preact web dashboard into its binary. Start the daemon, then open your browser — no extra install or `npm run` needed:

```
http://localhost:7878
```
| Feature | Detail |
|---|---|
| Source sidebar | Active sources with color dots and record counts. Click to isolate a source. |
| Level filter pills | ERR WRN INF DBG TRC — toggle individual log levels on/off. |
| Regex filter bar | Substring or regex, matched against payload and source name. |
| Auto-scroll | Follows new records automatically. Scroll up to pause; ↓ latest button resumes. |
| Consistent colors | Source colors match the TUI palette exactly. |
| 60 fps rendering | Records are batched to requestAnimationFrame cadence — handles high-velocity streams without thrashing. |
The web UI is compiled with Vite + Preact and embedded into the daemon binary via `rust-embed`. Build it before `cargo build`:

```bash
cd web && npm install && npm run build
cd .. && cargo build -p tailflow-daemon --release
```

For hot-reload development:
```bash
# Terminal 1: run the daemon with live sources
cargo run -p tailflow-daemon -- --docker

# Terminal 2: Vite dev server (proxies /events and /api to the daemon)
cd web && npm run dev
# open http://localhost:5173
```

## HTTP Daemon & SSE API

`tailflow-daemon` runs as a lightweight background process and exposes your local log stream over HTTP. Useful for browser-based inspection or sharing logs with a teammate on the same local network.
```bash
tailflow-daemon              # auto-discovers tailflow.toml
tailflow-daemon --port 9000  # custom port
tailflow-daemon --docker     # Docker only, no config file
```

| Endpoint | Description |
|---|---|
| `GET /events` | Server-Sent Events stream — one JSON `LogRecord` per event |
| `GET /api/records` | Last 500 buffered records as a JSON array |
| `GET /health` | `{"ok": true}` liveness check |
| `GET /` | Embedded Preact web dashboard |
```js
const source = new EventSource("http://localhost:7878/events");

source.onmessage = (e) => {
  const record = JSON.parse(e.data);
  // { timestamp, source, level, payload }
  console.log(`[${record.source}] ${record.payload}`);
};
```

```json
{
  "timestamp": "2026-04-04T10:23:45.123Z",
  "source": "api",
  "level": "error",
  "payload": "connection refused: postgres:5432"
}
```

`level` is one of: `trace` | `debug` | `info` | `warn` | `error` | `unknown`
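As a rough illustration of how a raw line could be bucketed into one of those levels (an assumed heuristic, not the detection logic `tailflow-core` actually ships):

```rust
#[derive(Debug, PartialEq)]
enum LogLevel { Trace, Debug, Info, Warn, Error, Unknown }

/// Guess a level by scanning the line for common severity tokens.
/// Purely illustrative: a naive scan like this would misfire on words
/// such as "stacktrace", which a real parser has to handle.
fn detect_level(line: &str) -> LogLevel {
    let upper = line.to_uppercase();
    if upper.contains("TRACE") { LogLevel::Trace }
    else if upper.contains("DEBUG") { LogLevel::Debug }
    else if upper.contains("WARN") { LogLevel::Warn }
    else if upper.contains("ERROR") || upper.contains("ERR ") { LogLevel::Error }
    else if upper.contains("INFO") { LogLevel::Info }
    else { LogLevel::Unknown }
}

fn main() {
    println!("{:?}", detect_level("2026-04-04 ERROR connection refused"));
}
```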
## Configuration Reference

`tailflow.toml` is optional. When present, both `tailflow` and `tailflow-daemon` load it automatically.
```toml
[sources]

# Tail all running Docker containers (set to true to enable)
docker = false

# Label piped stdin (active only when stdin is not a TTY)
# stdin = "pipe"

# ── File sources ──────────────────────────────────────────
[[sources.file]]
path = "logs/app.log"
label = "app"            # optional; defaults to the filename

# ── Process sources ───────────────────────────────────────
# TailFlow spawns these commands and captures stdout + stderr.
[[sources.process]]
label = "frontend"
cmd = "npm run dev"

[[sources.process]]
label = "api"
cmd = "go run ./cmd/api"
```

CLI flags are additive on top of the config file. `tailflow --docker` adds Docker containers to whatever sources are already defined in `tailflow.toml`.
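That additive behavior reduces to a merge in which flags can enable or append sources but never remove them. A hypothetical sketch; the struct and field names are assumptions, not TailFlow's real types:

```rust
#[derive(Debug, Default)]
struct Config {
    docker: bool,
    files: Vec<String>,
}

/// Apply CLI flags on top of a loaded config: flags can turn Docker on
/// or add extra files, but never disable what the file already enabled.
fn apply_cli(mut cfg: Config, cli_docker: bool, cli_files: Vec<String>) -> Config {
    cfg.docker = cfg.docker || cli_docker;
    cfg.files.extend(cli_files);
    cfg
}

fn main() {
    let from_file = Config { docker: false, files: vec!["logs/app.log".into()] };
    let merged = apply_cli(from_file, true, vec!["logs/worker.log".into()]);
    println!("docker={} files={:?}", merged.docker, merged.files);
}
```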
## Architecture

TailFlow separates ingestion from presentation through a Tokio broadcast channel. Adding a new UI (desktop app, VS Code extension, etc.) only requires a new consumer — the core engine is untouched.
```
┌──────────────────────────────────────────────────────────┐
│                      tailflow-core                       │
│                                                          │
│   DockerSource ──┐                                       │
│   ProcessSource ─┼──► broadcast::channel<LogRecord> ─┐   │
│   FileSource ────┤                                   │   │
│   StdinSource ───┘                                   │   │
└──────────────────────────────────────────────────────┼───┘
                                                       │
         ┌──────────────────────┬──────────────────────┘
         │                      │
┌────────▼────────┐   ┌─────────▼────────────────────────────┐
│  tailflow-tui   │   │  tailflow-daemon                     │
│                 │   │                                      │
│  ratatui TUI    │   │  axum HTTP server                    │
│  color-coded    │   │  GET /events       (SSE stream)      │
│  regex filter   │   │  GET /api/records  (last 500 JSON)   │
│  scroll/search  │   │  GET /health                         │
└─────────────────┘   │  GET /*            (embedded web UI) │
                      └──────────────────────────────────────┘
```
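The fan-out above can be mimicked with std channels as a toy stand-in for the `tokio::sync::broadcast` channel TailFlow actually uses; the sketch only exists to make the one-producer, many-consumers shape concrete:

```rust
use std::sync::mpsc;

#[derive(Clone, Debug)]
struct LogRecord {
    source: String,
    payload: String,
}

/// Toy broadcast bus: each subscriber gets its own channel, and every
/// published record is cloned to all of them. tokio's broadcast channel
/// provides this (plus lag handling) without the explicit clone loop.
struct Bus {
    subscribers: Vec<mpsc::Sender<LogRecord>>,
}

impl Bus {
    fn new() -> Self { Bus { subscribers: Vec::new() } }

    fn subscribe(&mut self) -> mpsc::Receiver<LogRecord> {
        let (tx, rx) = mpsc::channel();
        self.subscribers.push(tx);
        rx
    }

    fn publish(&self, record: LogRecord) {
        for tx in &self.subscribers {
            let _ = tx.send(record.clone()); // ignore disconnected consumers
        }
    }
}

fn main() {
    let mut bus = Bus::new();
    let tui_rx = bus.subscribe();    // stand-in for tailflow-tui
    let daemon_rx = bus.subscribe(); // stand-in for tailflow-daemon

    bus.publish(LogRecord { source: "api".into(), payload: "listening on :8080".into() });

    // Both consumers observe the same record independently.
    println!("tui saw: {}", tui_rx.recv().unwrap().payload);
    println!("daemon saw: {}", daemon_rx.recv().unwrap().payload);
}
```

A new front end (say, a VS Code extension) would just call `subscribe` again; the ingestion side never changes.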
| Layer | Technology |
|---|---|
| Language | Rust (2021 edition) |
| Async runtime | Tokio |
| Docker integration | bollard |
| File watching | notify |
| TUI framework | ratatui + crossterm |
| HTTP server | axum |
| Web UI | Preact + Vite (embedded via rust-embed) |
| npm distribution | Platform-specific optional dependencies |
## Project Layout

```
tailflow/
├── tailflow.example.toml          # annotated config reference
├── Cargo.toml                     # Rust workspace
├── web/                           # Preact web dashboard source
│   ├── package.json
│   ├── vite.config.ts             # dev proxy → daemon :7878
│   └── src/
│       ├── App.tsx                # layout, filter state, auto-scroll
│       ├── types.ts               # LogRecord type, color palette
│       ├── hooks/useLogStream.ts  # EventSource + RAF batching
│       └── components/
│           ├── LogRow.tsx
│           └── Sidebar.tsx
├── crates/
│   ├── tailflow-core/             # ingestion engine — no UI dependencies
│   │   └── src/
│   │       ├── lib.rs             # LogRecord, LogLevel, broadcast bus
│   │       ├── config.rs          # tailflow.toml parser
│   │       └── ingestion/
│   │           ├── docker.rs      # bollard: Docker socket
│   │           ├── file.rs        # notify: filesystem tail
│   │           ├── process.rs     # tokio::process: spawn + capture
│   │           └── stdin.rs       # async stdin reader
│   ├── tailflow-tui/              # `tailflow` binary
│   └── tailflow-daemon/           # `tailflow-daemon` binary
├── npm/
│   ├── tailflow/                  # published as `tailflow` on npm
│   │   └── bin/run.js             # platform detection + spawnSync launcher
│   └── platforms/                 # @tailflow/<platform> packages
│       ├── darwin-arm64/
│       ├── darwin-x64/
│       ├── linux-x64/
│       ├── linux-arm64/
│       └── win32-x64/
└── scripts/
    ├── bump-version.js            # sync version across package.json + Cargo.toml
    └── pack-local.sh              # local build + npm pack for testing
```
## Roadmap

- Web dashboard search bar — live `?grep=` filter input in the UI so users don't need to hand-craft query params
- Log export — download filtered records as `.ndjson` or `.txt` from the web dashboard
- Graceful shutdown — SIGTERM drains in-flight records and flushes the ring buffer before exit
- `--follow` flag for files — tail from the end by default; `--no-follow` reads the whole file and exits (like `tail -f` vs `cat`)
- Docker Compose integration — auto-discover services from a `docker-compose.yml` in the project root without listing them manually
- Log level filter toggles in TUI — press `e`/`w`/`i`/`d` to show/hide Error, Warn, Info, Debug levels; currently only regex filter exists
- Persistent log buffer to disk — optional SQLite ring buffer so logs survive daemon restarts and can be queried historically
- `[[sources.http]]` webhook receiver — accept POST payloads from external services (Vercel, Render, Fly.io log drains) and ingest them as a named source
- Web dashboard dark/light theme toggle — currently hardcoded dark; one `prefers-color-scheme` CSS variable swap would cover both
- OpenTelemetry / OTLP exporter — forward collected logs to a collector (Grafana Cloud, Honeycomb, Datadog) for teams who want cloud retention without changing their local workflow
- TUI split-pane view — side-by-side panes showing two sources simultaneously; useful when debugging a frontend + backend at the same time
- Plugin system for custom sources — WASM or subprocess-based source plugins so users can add sources (Kafka, Redis pub/sub, AWS CloudWatch) without forking
- AI log summarisation — `s` key in TUI calls a local LLM (Ollama) or cloud API to summarise the last N error records into a plain-English diagnosis
## Contributing

Contributions are welcome. Please open an issue before submitting a large PR so we can align on the approach.
```bash
# Run the full quality gate locally before pushing
cargo fmt --all
cargo clippy --all-targets --all-features -- -D warnings
cargo test --all
```

The CI workflow runs fmt, clippy, build, and test on every push and pull request targeting `main` or `dev`.
## License

MIT — see LICENSE.