DashFrog

A Grafana your support team can actually use.

[Screenshot: DashFrog status page]

What is DashFrog?

DashFrog is open-source observability built around your customers, not your infrastructure.

It sits on OpenTelemetry but abstracts away the complexity. Customer namespaces are auto-created as you push data. Anyone on your team can explore what's happening with a customer — no PromQL, no trace IDs.

Not a replacement for dev observability. Keep using Datadog, Grafana, or whatever you use for infrastructure monitoring. DashFrog complements them by organizing telemetry per customer — making it easy for support, account managers, and customers themselves to understand what's happening.
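The "auto-created namespaces" idea boils down to keying telemetry by a tenant label instead of pre-provisioning anything. A toy stdlib sketch of that shape (`NamespaceRegistry` and `push` are illustrative names, not DashFrog's API):

```python
from collections import defaultdict

class NamespaceRegistry:
    """Toy model: a tenant namespace springs into existence on first write."""

    def __init__(self):
        # tenant -> list of telemetry records; defaultdict means no
        # explicit create_namespace() step is ever needed
        self._namespaces = defaultdict(list)

    def push(self, tenant: str, record: dict):
        # First push for a tenant implicitly creates its namespace
        self._namespaces[tenant].append(record)

    def tenants(self):
        return sorted(self._namespaces)

registry = NamespaceRegistry()
registry.push("acme-corp", {"metric": "import_duration", "value": 4.2})
registry.push("globex", {"metric": "import_duration", "value": 1.1})
```

After those two pushes, `registry.tenants()` lists both customers without any setup call having been made.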

Key Features:

  • 🎯 Customer-first - Organize telemetry by customer, not infrastructure
  • Zero config - Customer namespaces auto-created as data arrives
  • 🔍 No query languages - Explore without PromQL or trace IDs
  • 📊 Shareable insights - Give customers visibility into their own data

Try the Demo

See DashFrog in action with a 2-minute demo:

```bash
curl -fsSL https://raw.githubusercontent.com/towlabs/dashfrog/main/bin/deploy | bash -s -- --with-demo
```

This will:

  1. Install DashFrog with Docker Compose
  2. Start the demo generating sample data
  3. Create status page notebooks for 3 customers

Access the UI at http://localhost:8000 (login: admin / admin)

For production: See the Deployment Guide for Kubernetes, custom configuration, and security hardening.

Key concepts

Flows

Flows let you follow a distributed workflow as a sequence of logical steps.

You define a flow in your code. DashFrog tracks it across services using OpenTelemetry. Your support team sees "customer X's import is stuck at validation" — not span IDs and service graphs.

```python
from dashfrog import flow, step

# Start a flow for a customer
with flow.start(
    name="customer_data_import",  # flow name
    tenant="acme-corp",  # tenant name
    env="prod",  # optional labels
):
    # Each step is tracked
    with step.start("validate_data"):
        # validation logic
        validate_csv(file)

    with step.start("transform_data"):
        # transformation logic
        transform(data)

    with step.start("load_to_database"):
        # database logic
        db.insert(data)
```

Flow data is automatically available in notebooks, where you can query and visualize workflows per customer.

→ See Flows documentation for distributed flows, error handling, and advanced usage.
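Conceptually, each step is a timed unit with a status, and a failed step is where the flow reads as "stuck." A minimal stdlib sketch of that mechanism (`tracked_step` and `flow_log` are illustrative, not DashFrog's implementation, which is OpenTelemetry-backed):

```python
import time
from contextlib import contextmanager

flow_log = []  # step records, roughly what a support view is built from

@contextmanager
def tracked_step(name: str):
    """Record a step's duration and outcome, conceptually like step.start()."""
    start = time.monotonic()
    try:
        yield
        flow_log.append(
            {"step": name, "status": "ok", "seconds": time.monotonic() - start}
        )
    except Exception as exc:
        # The failed step is recorded before the error propagates,
        # so the log shows exactly where the workflow stopped
        flow_log.append(
            {
                "step": name,
                "status": "failed",
                "seconds": time.monotonic() - start,
                "error": str(exc),
            }
        )
        raise

try:
    with tracked_step("validate_data"):
        raise ValueError("row 17: missing email")
except ValueError:
    pass
```

After this runs, `flow_log` shows a single `validate_data` entry with status `failed`, which is the raw material behind "customer X's import is stuck at validation."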

Metrics

Metrics use standard OTel under the hood. DashFrog presents them so you don't need to know what a gauge, counter, or histogram is.

```python
import time

from fastapi import FastAPI
from dashfrog import metrics

app = FastAPI()

computation_duration = metrics.Histogram(
    "computation_duration", labels=["env"], pretty_name="Computation Duration", unit="s"
)
computation_count = metrics.Counter("computation_count", labels=["env"], pretty_name="Computations")

@app.get("/heavy-computation/{customer_id}/{env}")
async def heavy_computation(customer_id: str, env: str):
    start = time.monotonic()
    time.sleep(3)  # simulate heavy work
    duration = time.monotonic() - start
    computation_duration.record(duration, tenant=customer_id, env=env)
    computation_count.add(1, tenant=customer_id, env=env)
    return {"duration_seconds": duration}
```

Metrics data is automatically available in notebooks for querying and visualization.

→ See Metrics documentation for histograms, percentiles, labels, and best practices.
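What makes these metrics customer-first is that every data point carries the tenant as a first-class label, so a per-customer rollup is just a filter over the label set. A toy sketch of that shape (`TenantCounter` and `total_for` are illustrative names, not the real metrics implementation):

```python
from collections import Counter

class TenantCounter:
    """Toy counter keyed by (tenant, labels) — the shape behind Counter.add()."""

    def __init__(self, name: str):
        self.name = name
        self._values = Counter()

    def add(self, amount: int, *, tenant: str, **labels):
        # Tenant plus a canonical ordering of the remaining labels forms the key
        key = (tenant, tuple(sorted(labels.items())))
        self._values[key] += amount

    def total_for(self, tenant: str) -> int:
        # Per-customer rollup: sum every series belonging to this tenant
        return sum(v for (t, _), v in self._values.items() if t == tenant)

computations = TenantCounter("computation_count")
computations.add(1, tenant="acme-corp", env="prod")
computations.add(1, tenant="acme-corp", env="staging")
computations.add(1, tenant="globex", env="prod")
```

Here `computations.total_for("acme-corp")` aggregates across the `env` label, which is the kind of question a support view answers without any query language.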

Notebooks

Build customer-specific dashboards with a block-based editor. Combine metrics and flows to create views you can share publicly with customers or use internally for support.

Features:

  • Drill-down into historical data by clicking any metric or flow
  • Share public notebooks via URL
  • Add time annotations for releases, incidents, and events

→ See Notebooks documentation for details

Roadmap

Ideas we're exploring:

  • External data sources (API, Prometheus, ...)
  • Helpdesk integrations (Zendesk, Intercom)
  • Alerting rules
  • Frontend SDK for embedding components in apps

License

MIT License - see LICENSE for details.