This guide covers building container images and deploying HyperFleet API to Kubernetes using Helm.
Build and push container images:
```shell
# Build container image with default tag
make image

# Build with custom tag
make image IMAGE_TAG=v1.0.0

# Build and push to default registry
make image-push

# Build and push to personal Quay registry (for development)
QUAY_USER=myuser make image-dev
```

The default container image is:

```
quay.io/openshift-hyperfleet/hyperfleet-api:latest
```
To use a custom container registry:
```shell
# Build with custom registry
make image \
  IMAGE_REGISTRY=your-registry.io/yourorg \
  IMAGE_TAG=v1.0.0

# Push to custom registry
podman push your-registry.io/yourorg/hyperfleet-api:v1.0.0
```

HyperFleet API is configured via environment variables and configuration files.
Kubernetes deployments (recommended):
- Non-sensitive config: ConfigMap (automatically created by the Helm Chart from `values.yaml`)
- Sensitive data: Secrets with `secretKeyRef` (Kubernetes best practice, automatic via the Helm Chart)

Local development:

- Configuration file: `./configs/config.yaml` or the `--config` flag
- Environment variables: Direct values for quick testing
See Configuration Guide for complete reference and priority rules.
Configuration Flow in Kubernetes:

```
┌─────────────────────────────────────────────────────────────┐
│ Helm Chart                                                  │
│                                                             │
│ values.yaml                                                 │
│ ├─ server.port, logging.level, etc.                         │
│ └─ database.external.secretName                             │
└──────────────────┬──────────────────────────────────────────┘
                   │
                   ├─────────────────┬────────────────────────┐
                   ▼                 ▼                        ▼
         ┌──────────────────┐ ┌─────────────┐         ┌───────────────┐
         │ ConfigMap        │ │ Secret      │         │ Deployment    │
         │                  │ │             │         │               │
         │ Non-sensitive:   │ │ Sensitive:  │         │ Env vars:     │
         │ - server.host    │ │ - db.host   │         │ - HYPERFLEET  │
         │ - server.port    │ │ - db.user   │         │   _CONFIG     │
         │ - logging.level  │ │ - db.pass   │         │ - secretKeyRef│
         └──────┬───────────┘ └──────┬──────┘         └───────┬───────┘
                │                    │                        │
                │                    │                        │
                └────────────────────┴────────────────────────┘
                                       │
                                       ▼
                    ┌─────────────────────────────────────┐
                    │ Pod                                 │
                    │                                     │
                    │ Volume Mounts:                      │
                    │ - /etc/hyperfleet/config.yaml       │
                    │   (from ConfigMap)                  │
                    │                                     │
                    │ Environment Variables:              │
                    │ - HYPERFLEET_CONFIG=                │
                    │   /etc/hyperfleet/config.yaml       │
                    │ - HYPERFLEET_DATABASE_HOST=         │
                    │   (from Secret via secretKeyRef)    │
                    │ - HYPERFLEET_DATABASE_PASSWORD=     │
                    │   (from Secret via secretKeyRef)    │
                    └─────────────┬───────────────────────┘
                                  │
                                  ▼
                    ┌─────────────────────────────────────┐
                    │ Application                         │
                    │                                     │
                    │ 1. Load config from file            │
                    │    (/etc/hyperfleet/config.yaml)    │
                    │ 2. Apply environment variables      │
                    │ 3. Apply CLI flags (if any)         │
                    │                                     │
                    │ Priority: Flags > Env Vars >        │
                    │           ConfigMap > Defaults      │
                    └─────────────────────────────────────┘
```
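The precedence order (Flags > Env Vars > ConfigMap > Defaults) can be emulated with shell parameter expansion as a sketch; the values here are illustrative, and this is not the application's actual loader:

```shell
# First non-empty value wins, checked from highest to lowest priority.
default_level="info"    # built-in default
file_level="warn"       # value from /etc/hyperfleet/config.yaml (assumed)
env_level=""            # HYPERFLEET_LOGGING_LEVEL not set in this example
flag_level=""           # no CLI flag passed

level="${flag_level:-${env_level:-${file_level:-$default_level}}}"
echo "$level"   # prints "warn" -- the config file wins over the default
```

Setting `env_level` (or the flag) would override the file value, mirroring the priority chain above.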
The API validates cluster and nodepool spec fields against an OpenAPI schema. This allows different providers (GCP, AWS, Azure) to have different spec structures.
- Configuration: `server.openapi_schema_path` (supports config file, env var, or CLI flag)
- Default: `openapi/openapi.yaml` (provider-agnostic base schema)
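As an illustration, a custom schema path could be set in the config file; the nesting under `server` is inferred from the option name above, so verify it against the Configuration Guide:

```yaml
# config.yaml (fragment): point the API at a provider-specific schema
server:
  openapi_schema_path: openapi/openapi.yaml
```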
See Configuration Guide for all configuration options.
HyperFleet API configuration is managed through:
- Helm Chart values (`values.yaml`) for Kubernetes deployments
- Configuration file (`config.yaml`) for local development
- Environment variables for overrides

For Kubernetes deployments, the Helm Chart generates:

- ConfigMap from `values.yaml` for non-sensitive configuration
- Secret mounts for credentials (using `*_FILE` environment variables)
Example: Setting required adapters (Helm):
```shell
--set 'config.adapters.required.cluster={validation,dns,pullsecret,hypershift}' \
--set 'config.adapters.required.nodepool={validation,hypershift}'
```

Example: Development override (environment variable):

```shell
export HYPERFLEET_LOGGING_LEVEL=debug
```

For complete configuration reference, including all available settings, defaults, and validation rules, see:
- Configuration Guide - Complete reference for all configuration options
- Helm Chart values.yaml - Kubernetes-specific settings
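Environment-variable overrides like `HYPERFLEET_LOGGING_LEVEL` follow a naming pattern that is assumed here from the examples (verify in the Configuration Guide): uppercase the dotted config key, replace dots with underscores, and add the `HYPERFLEET_` prefix. A quick shell sketch:

```shell
# Derive the env var name for a dotted config key (assumed convention).
key="logging.level"
env_name="HYPERFLEET_$(echo "$key" | tr '[:lower:]' '[:upper:]' | tr '.' '_')"
echo "$env_name"   # prints "HYPERFLEET_LOGGING_LEVEL"
```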
The project includes a Helm chart for Kubernetes deployment with configurable PostgreSQL support.
Deploy with built-in PostgreSQL for development and testing:
```shell
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --create-namespace \
  --set 'config.adapters.required.cluster={validation,dns,pullsecret,hypershift}' \
  --set 'config.adapters.required.nodepool={validation,hypershift}'
```

This creates:
- HyperFleet API deployment
- PostgreSQL StatefulSet
- Services for both components
- ConfigMaps and Secrets
Deploy with external database (recommended for production):
```shell
kubectl create secret generic hyperfleet-db-external \
  --namespace hyperfleet-system \
  --from-literal=db.host=<your-db-host> \
  --from-literal=db.port=5432 \
  --from-literal=db.name=hyperfleet \
  --from-literal=db.user=hyperfleet \
  --from-literal=db.password=<your-password>
```

```shell
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set database.postgresql.enabled=false \
  --set database.external.enabled=true \
  --set database.external.secretName=hyperfleet-db-external \
  --set 'config.adapters.required.cluster={validation,dns,pullsecret,hypershift}' \
  --set 'config.adapters.required.nodepool={validation,hypershift}'
```

How it works:
- Helm Chart creates a ConfigMap with non-sensitive configuration
- Your Secret (created in Step 1) contains database credentials
- Helm Chart injects credentials as environment variables using `secretKeyRef`
- Application reads credentials from environment variables
- Credentials are never exposed in pod specs or ConfigMaps
This is the Kubernetes-native pattern for handling sensitive data securely.
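Under the hood, each credential is wired into the container through a `secretKeyRef` entry. A sketch of what one rendered env entry looks like (key names follow the Secret created in Step 1; the exact rendering belongs to the chart):

```yaml
# Illustrative env entry in the rendered Deployment (not the chart's literal output)
env:
  - name: HYPERFLEET_DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: hyperfleet-db-external   # the Secret created in Step 1
        key: db.password               # key inside that Secret
```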
Deploy with custom container image (e.g., quay.io/myuser/hyperfleet-api:v1.0.0):
```shell
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set image.registry=quay.io \
  --set image.repository=myuser/hyperfleet-api \
  --set image.tag=v1.0.0 \
  --set 'config.adapters.required.cluster={validation,dns,pullsecret,hypershift}' \
  --set 'config.adapters.required.nodepool={validation,hypershift}'
```

Note: The registry should contain only the registry domain (e.g., quay.io, docker.io). The repository includes the organization and image name (e.g., myuser/hyperfleet-api).
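In other words, the three values are concatenated into a single image reference as `registry/repository:tag`:

```shell
# How the three image values combine into one reference
registry="quay.io"
repository="myuser/hyperfleet-api"
tag="v1.0.0"
image="${registry}/${repository}:${tag}"
echo "$image"   # prints "quay.io/myuser/hyperfleet-api:v1.0.0"
```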
Upgrade to a new version:

```shell
helm upgrade hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set image.tag=v1.1.0
```

Remove the deployment:

```shell
helm uninstall hyperfleet-api --namespace hyperfleet-system
```

| Parameter | Description | Default |
|---|---|---|
| `image.registry` | Container registry | `quay.io` |
| `image.repository` | Image repository | `openshift-hyperfleet/hyperfleet-api` |
| `image.tag` | Image tag | `latest` |
| `image.pullPolicy` | Image pull policy | `Always` |
| `config.adapters.required.cluster` | Cluster adapters required for Ready state | `[]` |
| `config.adapters.required.nodepool` | Nodepool adapters required for Ready state | `[]` |
| `config.server.jwt.enabled` | Enable JWT authentication | `true` |
| `database.postgresql.enabled` | Enable built-in PostgreSQL | `true` |
| `database.external.enabled` | Use external database | `false` |
| `database.external.secretName` | Secret containing database credentials | `hyperfleet-db-external` |
| `serviceMonitor.enabled` | Enable Prometheus Operator ServiceMonitor | `false` |
| `serviceMonitor.interval` | Metrics scrape interval | `30s` |
| `serviceMonitor.scrapeTimeout` | Metrics scrape timeout | `10s` |
| `serviceMonitor.labels` | Additional labels for Prometheus selector | `{}` |
| `serviceMonitor.namespace` | Namespace for ServiceMonitor (if different) | `""` |
| `replicaCount` | Number of API replicas | `1` |
| `resources.limits.cpu` | CPU limit | `500m` |
| `resources.limits.memory` | Memory limit | `512Mi` |
| `podDisruptionBudget.enabled` | Enable PodDisruptionBudget | `false` |
| `podDisruptionBudget.minAvailable` | Minimum available pods during disruption | `1` |
| `podDisruptionBudget.maxUnavailable` | Maximum unavailable pods during disruption | - |
Create a values.yaml file:
```yaml
# values.yaml
image:
  registry: quay.io
  repository: myuser/hyperfleet-api
  tag: v1.0.0

config:
  server:
    jwt:
      enabled: true
  adapters:
    required:
      cluster:
        - validation
        - dns
        - pullsecret
        - hypershift
      nodepool:
        - validation
        - hypershift

database:
  postgresql:
    enabled: false
  external:
    enabled: true
    secretName: hyperfleet-db-external

replicaCount: 3

resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 512Mi
```

Deploy with custom values:

```shell
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --values values.yaml
```

```shell
# Get deployment status
helm status hyperfleet-api --namespace hyperfleet-system

# List all releases
helm list --namespace hyperfleet-system

# Check pods
kubectl get pods --namespace hyperfleet-system

# Check services
kubectl get svc --namespace hyperfleet-system
```

```shell
# View API logs
kubectl logs -f deployment/hyperfleet-api --namespace hyperfleet-system

# View logs from all pods
kubectl logs -f -l app=hyperfleet-api --namespace hyperfleet-system

# View PostgreSQL logs (if using built-in)
kubectl logs -f statefulset/hyperfleet-postgresql --namespace hyperfleet-system
```

```shell
# Describe pod for events and status
kubectl describe pod <pod-name> --namespace hyperfleet-system

# Check deployment events
kubectl get events --namespace hyperfleet-system --sort-by='.lastTimestamp'

# Exec into pod for debugging
kubectl exec -it deployment/hyperfleet-api --namespace hyperfleet-system -- /bin/sh

# Check secrets
kubectl get secrets --namespace hyperfleet-system

# Verify ConfigMaps
kubectl get configmaps --namespace hyperfleet-system
```

The deployment includes:
- Liveness probe: `GET /healthz` (port 8080) - Returns 200 if the process is alive
- Readiness probe: `GET /readyz` (port 8080) - Returns 200 when ready to receive traffic, 503 during startup/shutdown
- Metrics: `GET /metrics` (port 9090) - Prometheus metrics endpoint
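The probe wiring in the pod spec corresponds to these endpoints; an illustrative configuration (the chart's actual timings and thresholds may differ):

```yaml
# Illustrative container probes matching the endpoints above
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
```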
Scale replicas:
```shell
# Manual scaling
kubectl scale deployment hyperfleet-api --replicas=3 --namespace hyperfleet-system

# Via Helm
helm upgrade hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set replicaCount=3
```

Enable autoscaling via Helm values (`autoscaling.enabled=true`).
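If you enable autoscaling, a values fragment might look like the following; only `autoscaling.enabled` is documented above, so the remaining keys are assumptions to verify against the chart's `values.yaml`:

```yaml
# Hypothetical values.yaml fragment -- check the chart for the actual key names
autoscaling:
  enabled: true
  minReplicas: 2   # assumed key
  maxReplicas: 5   # assumed key
```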
Prometheus metrics are available at `http://<service>:9090/metrics`.
For clusters with Prometheus Operator, enable the ServiceMonitor to automatically discover and scrape metrics:
```shell
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set serviceMonitor.enabled=true
```

If your Prometheus requires specific labels for service discovery, add them:

```shell
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set serviceMonitor.enabled=true \
  --set serviceMonitor.labels.release=prometheus
```

To create the ServiceMonitor in a different namespace (e.g., monitoring):

```shell
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set serviceMonitor.enabled=true \
  --set serviceMonitor.namespace=monitoring
```

Before deploying to production, ensure:
- Database: External managed database configured (Cloud SQL, RDS, Azure Database)
- Secrets: Database credentials stored in a Secret (not a ConfigMap)
- Authentication: JWT enabled (`config.server.jwt.enabled=true`)
- Adapters: Required adapters specified for cluster and nodepool
- Resources: CPU/memory limits and requests set
- Replicas: Multiple replicas configured (`replicaCount >= 2`)
- Image: Specific version tag (not `latest`)
- Disruption: PodDisruptionBudget enabled (`podDisruptionBudget.enabled=true`)
- Monitoring: ServiceMonitor enabled if using Prometheus Operator
- TLS: HTTPS enabled for API endpoint (optional)
- Use an external managed database (Cloud SQL, RDS, Azure Database) with automated backups
- Store all sensitive data in Kubernetes Secrets, never in a ConfigMap or values.yaml
- Enable authentication with `config.server.jwt.enabled=true`
- Set resource limits and use multiple replicas for high availability
- Use specific image tags (semantic versioning) instead of `latest`
- Enable PodDisruptionBudget for zero downtime during cluster maintenance
- Configure health probes with appropriate timeouts for your workload
```shell
# 1. Build and push image
export QUAY_USER=myuser
podman login quay.io
make image-dev

# 2. Get GKE credentials
gcloud container clusters get-credentials my-cluster \
  --zone=us-central1-a \
  --project=my-project

# 3. Create namespace
kubectl create namespace hyperfleet-system
kubectl config set-context --current --namespace=hyperfleet-system

# 4. Create database secret (for production)
kubectl create secret generic hyperfleet-db-external \
  --from-literal=db.host=10.10.10.10 \
  --from-literal=db.port=5432 \
  --from-literal=db.name=hyperfleet \
  --from-literal=db.user=hyperfleet \
  --from-literal=db.password=secretpassword

# 5. Deploy with Helm
helm install hyperfleet-api ./charts/ \
  --set image.registry=quay.io \
  --set image.repository=myuser/hyperfleet-api \
  --set image.tag=dev-abc123 \
  --set config.server.jwt.enabled=false \
  --set database.postgresql.enabled=false \
  --set database.external.enabled=true \
  --set 'config.adapters.required.cluster={validation,dns,pullsecret,hypershift}' \
  --set 'config.adapters.required.nodepool={validation,hypershift}'

# 6. Verify deployment
kubectl get pods
kubectl logs -f deployment/hyperfleet-api

# 7. Access API (port-forward for testing)
kubectl port-forward svc/hyperfleet-api 8000:8000
curl http://localhost:8000/api/hyperfleet/v1/clusters
```

- Development Guide - Local development setup
- Authentication - Authentication configuration