
anoopsimon/k6-api-benchmark


k6 performance testing demo

Simple Express API plus a k6 test suite that follows performance-testing best practices (scenarios, thresholds, tagging, reporting).

Prereqs

  • Node.js 18+
  • k6 CLI installed locally. If you don't have it:
    • macOS: brew install k6
    • Windows (winget): winget install k6.k6
    • Windows (Chocolatey): choco install k6
    • Linux (deb/rpm): see https://k6.io/docs/get-started/installation/
    • Docker: docker run -it --rm -v $(pwd):/scripts -w /scripts grafana/k6 run k6/perf.js (on Windows cmd, use %cd% instead of $(pwd))
  • Docker (for Grafana + InfluxDB stack)

Setup

npm install

Run the demo API

npm start
# API listens on http://localhost:3000 (override with PORT env var)

Endpoints:

  • GET /health
  • GET /api/orders?limit=5
  • GET /api/orders/:id
  • POST /api/orders (JSON body: { item, quantity, region })
  • GET /api/slow (adds variable latency)
  • GET /api/heavy (CPU-heavy loop)

k6 tests

Main script: k6/perf.js. It selects a scenario via the TEST_TYPE environment variable (smoke|load|stress; defaults to smoke) and targets the host given by BASE_URL (defaults to http://localhost:3000).
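The selection logic might look roughly like this inside a k6 script (a sketch with illustrative executor settings, not the repo's exact code; it only runs under the k6 CLI):

```javascript
// Sketch of env-driven scenario selection in k6 (illustrative rates/durations).
import http from 'k6/http';

const TEST_TYPE = __ENV.TEST_TYPE || 'smoke';
const BASE_URL = __ENV.BASE_URL || 'http://localhost:3000';

const scenarios = {
  smoke:  { executor: 'constant-arrival-rate', rate: 1, timeUnit: '1s',
            duration: '30s', preAllocatedVUs: 2, maxVUs: 5 },
  load:   { executor: 'ramping-arrival-rate', startRate: 5, timeUnit: '1s',
            preAllocatedVUs: 20, maxVUs: 50,
            stages: [{ target: 20, duration: '2m' }, { target: 0, duration: '30s' }] },
  stress: { executor: 'ramping-arrival-rate', startRate: 10, timeUnit: '1s',
            preAllocatedVUs: 50, maxVUs: 200,
            stages: [{ target: 100, duration: '5m' }] },
};

// Only the selected scenario is registered for this run.
export const options = { scenarios: { [TEST_TYPE]: scenarios[TEST_TYPE] } };

export default function () {
  http.get(`${BASE_URL}/health`);
}
```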

Common thresholds:

  • http_req_failed < 1%
  • http_req_duration p(95) < 750ms
  • order_create_duration p(95) < 800ms
  • checks > 99%
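In k6 options syntax those thresholds would read roughly as follows (a config fragment matching the list above; order_create_duration is the script's custom metric):

```javascript
// k6 thresholds fragment matching the list above (durations in milliseconds).
export const options = {
  thresholds: {
    http_req_failed:       ['rate<0.01'],  // < 1% request failures
    http_req_duration:     ['p(95)<750'],  // 95th percentile under 750ms
    order_create_duration: ['p(95)<800'],  // custom Trend metric in the script
    checks:                ['rate>0.99'],  // > 99% of checks pass
  },
};
```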

One-liners

# Smoke (light) - good for pipelines
npm run perf:smoke

# Load (ramping) - steady-state behavior
npm run perf:load

# Stress (ramping to failure) - find saturation
npm run perf:stress

Each command writes a JSON summary under artifacts/.

Custom run

BASE_URL=http://localhost:3000 TEST_TYPE=load k6 run k6/perf.js --summary-export=artifacts/custom-summary.json

Reports

Render a quick Markdown report from any summary:

npm run perf:report          # uses artifacts/summary.json
npm run perf:report:smoke    # artifacts/smoke-summary.json -> artifacts/smoke-report.md
npm run perf:report:load     # artifacts/load-summary.json -> artifacts/load-report.md
npm run perf:report:stress   # artifacts/stress-summary.json -> artifacts/stress-report.md

Grafana + InfluxDB (local)

Run a local metrics stack with Docker Compose (Grafana on http://localhost:3001, InfluxDB on http://localhost:8086).

# Start Grafana + InfluxDB
npm run grafana:up

# Push a k6 run into InfluxDB
k6 run k6/perf.js -e BASE_URL=http://localhost:3000 -o influxdb=http://localhost:8086/k6

# Stop stack when finished
npm run grafana:down

Grafana setup (once):

  1. Open http://localhost:3001 (default admin/admin).
  2. Add data source: InfluxDB
    • URL: http://influxdb:8086 (Grafana resolves the InfluxDB service name inside the compose network); use http://localhost:8086 only if Grafana itself runs outside Docker.
    • Database: k6
  3. Import dashboard: ID 2587 (k6 + InfluxDB 1.8) or 17823 (k6 v0.47+).
  4. Re-run k6 with the -o influxdb flag while the dashboard is open to see live charts.

Dashboard preview: (screenshot of the Grafana k6 dashboard)

What the test does

  • Uses arrival-rate executors with preallocated/max VUs and clear stages per scenario.
  • Tags requests (endpoint, scenario, testType) to aid breakdowns.
  • Groups logical flows: list orders -> create order -> fetch order -> hit slow/heavy endpoints.
  • Custom metrics for order creation latency/error rate with thresholds on critical paths.
  • Think time (sleep) to mimic user pacing.
  • handleSummary emits a compact stdout view plus JSON for reporting/CI.
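In k6 terms, those pieces might look like the following (a sketch with assumed names and values, not the repo's exact script; it only runs under the k6 CLI):

```javascript
// Sketch of grouping, tagging, a custom metric, think time, and handleSummary in k6.
import http from 'k6/http';
import { group, check, sleep } from 'k6';
import { Trend } from 'k6/metrics';

// Custom time-based metric thresholded on the critical path.
const orderCreateDuration = new Trend('order_create_duration', true);

export default function () {
  group('order flow', () => {
    const res = http.post('http://localhost:3000/api/orders',
      JSON.stringify({ item: 'demo', quantity: 1, region: 'apac' }),
      { headers: { 'Content-Type': 'application/json' },
        tags: { endpoint: 'orders', testType: __ENV.TEST_TYPE || 'smoke' } });
    orderCreateDuration.add(res.timings.duration);
    check(res, { 'order created': (r) => r.status === 201 });
  });
  sleep(1); // think time to mimic user pacing
}

export function handleSummary(data) {
  // Write the full summary JSON for the report scripts / CI.
  return { 'artifacts/summary.json': JSON.stringify(data, null, 2) };
}
```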

Repo structure

  • server.js - demo API for exercising latency and CPU paths.
  • k6/perf.js - k6 script with scenarios, thresholds, tagging, and summary export.
  • scripts/render-report.js - converts a k6 summary JSON to Markdown.
  • package.json - npm scripts for API + perf runs.
  • docker-compose.grafana.yml - Grafana + InfluxDB stack for dashboards.
