Reproducible HTTP server benchmarks on bare-metal AWS instances. Compare production Go frameworks against theoretical maximum performance using raw syscalls.
Most benchmarks run on shared VMs with noisy neighbors, making results unreliable. This suite runs on dedicated bare-metal instances (c6g.metal, c5.metal) with automated CI/CD, so every release gets consistent, comparable numbers.
We test two categories:
- Baseline: Production frameworks (Gin, Fiber, Echo, Chi, Iris, stdlib)
- Theoretical: Raw epoll/io_uring implementations showing the performance ceiling
Results are committed to `results/` on each release. See benchmark charts and raw data for:
- ARM64 (Graviton) on c6g.metal
- x86-64 (Intel) on c5.metal
Baseline servers:

| Server | Protocol | Framework |
|---|---|---|
| stdhttp | HTTP/1.1, H2C | Go stdlib |
| fiber | HTTP/1.1 | Fiber |
| gin | HTTP/1.1, H2C | Gin |
| chi | HTTP/1.1, H2C | Chi |
| echo | HTTP/1.1, H2C | Echo |
| iris | HTTP/1.1, H2C | Iris |
Theoretical servers:

| Server | Protocol | Implementation |
|---|---|---|
| epoll | HTTP/1.1, H2C | Raw epoll syscalls |
| iouring | HTTP/1.1, H2C | io_uring with multishot (kernel 6.15+) |
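The theoretical servers talk to epoll directly instead of going through `net/http`. Below is a minimal, Linux-only sketch of that pattern, using a Unix socketpair to stand in for an accepted TCP connection. The `epollDemo` function and its names are illustrative, not the suite's actual code.

```go
package main

import (
	"fmt"
	"syscall"
)

// epollDemo registers one fd with epoll, triggers readiness, and reads
// the payload back: the core loop a raw-epoll server is built on.
func epollDemo() (int, string) {
	// A Unix socketpair stands in for an accepted TCP connection.
	fds, err := syscall.Socketpair(syscall.AF_UNIX, syscall.SOCK_STREAM, 0)
	if err != nil {
		panic(err)
	}
	defer syscall.Close(fds[0])
	defer syscall.Close(fds[1])

	epfd, err := syscall.EpollCreate1(0)
	if err != nil {
		panic(err)
	}
	defer syscall.Close(epfd)

	// Watch the read end for incoming data.
	ev := syscall.EpollEvent{Events: syscall.EPOLLIN, Fd: int32(fds[0])}
	if err := syscall.EpollCtl(epfd, syscall.EPOLL_CTL_ADD, fds[0], &ev); err != nil {
		panic(err)
	}

	// Simulate a client write, then wait (up to 1s) for readiness.
	if _, err := syscall.Write(fds[1], []byte("ping")); err != nil {
		panic(err)
	}
	events := make([]syscall.EpollEvent, 8)
	n, err := syscall.EpollWait(epfd, events, 1000)
	if err != nil {
		panic(err)
	}

	buf := make([]byte, 16)
	m, _ := syscall.Read(fds[0], buf)
	return n, string(buf[:m])
}

func main() {
	n, payload := epollDemo()
	fmt.Printf("events=%d payload=%q\n", n, payload)
}
```

The io_uring variant follows the same readiness-then-read shape but submits operations through a shared ring instead of making one syscall per event.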
```sh
# Clone and build
git clone https://github.com/goceleris/benchmarks
cd benchmarks
mage build

# Run a quick local benchmark
mage benchmarkQuick

# Run the full benchmark (30s per server)
mage benchmark

# Or invoke the bench binary directly
./bin/bench -mode baseline -duration 30s -connections 256
./bin/bench -mode theoretical -duration 30s -connections 256
./bin/bench -mode all -duration 60s -connections 512
```

| Type | Endpoint | Description |
|---|---|---|
| simple | GET / | Plain text response |
| json | GET /json | JSON serialization |
| path | GET /users/:id | Path parameter extraction |
| body | POST /upload | 4KB request body |
| headers | GET /users/:id | Realistic API headers (~850 bytes: JWT, cookies, User-Agent) |
Benchmarks run automatically:
- On Release: Full metal benchmark on bare-metal instances
- On PR (with label): Quick validation on smaller instances
The C2 orchestration system manages AWS spot instances, handles capacity fallbacks, and commits results automatically.
Benchmarks run on AWS using a C2 (command and control) server that:
- Provisions spot instances with on-demand fallback across multiple regions
- Selects regions dynamically based on spot pricing and vCPU quota availability
- Coordinates server/client workers across availability zones
- Logs region/AZ placement for each worker in workflow output
- Collects results and generates charts
- Cleans up resources automatically
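The spot-with-fallback flow can be sketched as a two-pass sweep over candidate regions. `Launcher`, `provision`, and the fake below are hypothetical stand-ins; the real C2 server calls the EC2 API and also weighs spot pricing and quota.

```go
package main

import (
	"errors"
	"fmt"
)

// Launcher abstracts a single launch attempt (hypothetical interface).
type Launcher interface {
	Launch(region string, spot bool) (instanceID string, err error)
}

// provision tries spot capacity in every candidate region first, then
// repeats the sweep with on-demand instances: one possible ordering of
// the fallback described above.
func provision(l Launcher, regions []string) (string, error) {
	for _, spot := range []bool{true, false} {
		for _, region := range regions {
			if id, err := l.Launch(region, spot); err == nil {
				return id, nil
			}
		}
	}
	return "", errors.New("no capacity in any region")
}

// fakeLauncher simulates exhausted spot capacity everywhere and
// on-demand quota available only in us-west-2.
type fakeLauncher struct{}

func (fakeLauncher) Launch(region string, spot bool) (string, error) {
	if spot {
		return "", errors.New("insufficient spot capacity")
	}
	if region == "us-west-2" {
		return "i-0abc123", nil
	}
	return "", errors.New("vCPU quota exceeded")
}

func main() {
	id, err := provision(fakeLauncher{}, []string{"us-east-1", "us-west-2"})
	fmt.Println(id, err)
}
```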
PRs can deploy their own C2 server by adding a benchmark label (`bench-fast`, `bench-med`, or `bench-metal`), enabling C2 code changes to be tested in isolation.
Each benchmark mode uses a fixed instance type to ensure consistent, comparable results:
| Mode | ARM64 Server | ARM64 Client | x86 Server | x86 Client |
|---|---|---|---|---|
| fast | c6g.medium | t4g.small | c5.large | t3.small |
| med | c6g.2xlarge | c6g.xlarge | c5a.2xlarge | c5a.xlarge |
| metal | c6g.metal | c6g.8xlarge | c5.metal | c5.18xlarge |
Total vCPUs (server + client) per mode:

| Mode | ARM64 vCPUs | x86 vCPUs |
|---|---|---|
| fast | 3 | 4 |
| med | 12 | 12 |
| metal | 96 | 168 |
Prerequisites:

- Go 1.25.5+
- Mage: build tool (a Go-based alternative to Make)

```sh
go install github.com/magefile/mage@latest   # or on macOS: brew install mage
```
Run `mage -l` to see all available commands with descriptions.
- Fork and create a feature branch
- Make your changes
- Run `mage check` to verify everything passes
- Submit a pull request
- Add the `bench-fast` label to PRs for benchmark validation

To add a new server:

- Create a package in `servers/baseline/` or `servers/theoretical/`
- Implement all benchmark endpoints
- Register it in `cmd/server/main.go`
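The registration step might look like the registry pattern below; the map, the `register` helper, and the server name are all hypothetical, so check `cmd/server/main.go` for the actual wiring.

```go
package main

import (
	"fmt"
	"net/http"
)

// registry maps a server name (as selected by the bench CLI) to a
// constructor for its handler. Hypothetical sketch only.
var registry = map[string]func() http.Handler{}

func register(name string, ctor func() http.Handler) {
	registry[name] = ctor
}

// newMyServer stands in for a new package under servers/baseline/.
func newMyServer() http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("Hello, World!"))
	})
	return mux
}

func main() {
	register("myserver", newMyServer)
	_, ok := registry["myserver"]
	fmt.Println("registered:", ok)
}
```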
Apache 2.0