The runtime that keeps your data where it belongs — on your infrastructure.
The tracebloc client deploys inside your Kubernetes cluster and executes all model training, fine-tuning, and inference locally. It connects to the tracebloc backend for orchestration only. No data, no model weights, no artifacts ever leave your environment.
Your infrastructure
┌─────────────────────────────────────────────────────────┐
│ │
│ ┌──────────────────┐ ┌───────────────────────┐ │
│ │ tracebloc │ │ Kubernetes cluster │ │
│ │ client │◄────►│ │ │
│ │ │ │ ● Training jobs │ │
│ │ Orchestrates │ │ ● Inference jobs │ │
│ │ training, │ │ ● Your datasets │ │
│ │ enforces budgets │ │ ● Fine-tuned weights │ │
│ └────────┬──────────┘ │ │ │
│ │ │ Everything stays here │ │
│ │ └───────────────────────┘ │
└────────────┼────────────────────────────────────────────┘
│
│ Encrypted (orchestration only — no data)
▼
┌─────────────────┐
│ tracebloc │
│ backend │
│ │
│ Coordinates │
│ experiments, │
│ serves web UI │
└─────────────────┘
- Training execution — runs vendor models in isolated, containerized sandboxes
- Compute budgets — enforces per-vendor FLOPs or runtime quotas
- Security boundaries — namespace isolation, encrypted communication, audit logging
- Multi-framework support — PyTorch, TensorFlow, custom containers
- Hardware scheduling — CPUs, GPUs, TPUs via Kubernetes-native orchestration
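The isolation and quota features above map onto standard Kubernetes primitives. As a rough sketch only — the namespace names, labels, and limits below are illustrative assumptions, not the actual tracebloc manifests — per-vendor isolation with hard compute caps might look like:

```yaml
# Illustrative sketch, not tracebloc's real configuration:
# a dedicated namespace per vendor, with a ResourceQuota
# capping the CPU, memory, and GPU that vendor's jobs can request.
apiVersion: v1
kind: Namespace
metadata:
  name: tracebloc-vendor-a          # hypothetical per-vendor namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: vendor-a-compute-quota
  namespace: tracebloc-vendor-a
spec:
  hard:
    requests.cpu: "16"              # example values only
    requests.memory: 64Gi
    requests.nvidia.com/gpu: "2"    # caps GPU use for this vendor
```

Kubernetes rejects any pod in the namespace whose aggregate requests would exceed the quota, which is the standard mechanism behind per-vendor runtime limits.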
```shell
docker pull tracebloc/client:latest
```

Deployment varies by infrastructure. Follow the guide for your setup:
Full documentation → docs.tracebloc.io
Apache 2.0 — see LICENSE.
Deployment help? support@tracebloc.io or open an issue.