Data Collection Orchestration

This repository contains Kubernetes manifests for the data collection orchestration stack, organized into three top-level directories:

  • observability/: the ObservabilityGateway custom resource that defines the gold, silver, and bronze ingestion backends
  • observability-ingress/: Gateway API resources in front of those backends
  • observability-system/: shared control-plane and metrics services, including Prometheus and a metrics proxy API

The resources they define span three namespaces:

  • observability: hosts the ObservabilityGateway custom resource and the resulting ingestion services
  • observability-gateway: hosts the external Gateway API resources used for OTLP/HTTP ingress and rate limiting
  • observability-system: hosts Prometheus and the metrics-proxy service used for internal metrics access

Repository Layout

observability/

Contains observability-gateway.yaml, which defines an ObservabilityGateway custom resource named prio-ingestion-gateway in the observability namespace. It declares three traffic classes:

  • gold
  • silver
  • bronze

Each class is expected to be backed by a corresponding Service in the observability namespace:

  • prio-ingestion-gateway-gold
  • prio-ingestion-gateway-silver
  • prio-ingestion-gateway-bronze
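
The README does not reproduce the manifest, but based on the description above, the custom resource might look roughly like this. The class names come from the list above; everything else, including the API group and the field names under spec, is illustrative, since the actual schema is defined by the ObservabilityGateway CRD:

```yaml
# Hypothetical sketch of observability/observability-gateway.yaml.
# The apiVersion and spec layout are placeholders; the real schema is
# defined by the ObservabilityGateway CRD, which is not in this repository.
apiVersion: observability.example.com/v1alpha1
kind: ObservabilityGateway
metadata:
  name: prio-ingestion-gateway
  namespace: observability
spec:
  trafficClasses:
    - name: gold
    - name: silver
    - name: bronze
```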

observability-ingress/

Contains the external routing layer in the observability-gateway namespace:

  • gateway.yaml: three Gateway resources using the kgateway GatewayClass
  • httproute.yaml: three HTTPRoute resources, one per traffic class
  • rate-limiting.yaml: three TrafficPolicy resources for class-specific local rate limiting
  • reference-grant.yaml: a ReferenceGrant that allows routes in observability-gateway to target Services in observability
  • kustomization.yaml: packages the ingress resources and applies common labels
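
As a sketch of how one traffic class wires together, the gold-class Gateway and HTTPRoute pair might look like the following. The listener port, backend Service name, backend port, and namespaces are taken from this README; the resource names themselves are illustrative:

```yaml
# Illustrative gold-class routing pair; see gateway.yaml and
# httproute.yaml for the authoritative definitions.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gold-gateway            # name is illustrative
  namespace: observability-gateway
spec:
  gatewayClassName: kgateway
  listeners:
    - name: otlp-http
      protocol: HTTP
      port: 4318
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: gold-route              # name is illustrative
  namespace: observability-gateway
spec:
  parentRefs:
    - name: gold-gateway
  rules:
    - backendRefs:
        - name: prio-ingestion-gateway-gold
          namespace: observability   # cross-namespace ref; needs the ReferenceGrant
          port: 4317
```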

observability-system/

Contains shared control-plane and metrics infrastructure in the observability-system namespace.

Prometheus

The prometheus/ directory contains:

  • configmap.yaml: a Prometheus configuration that uses Kubernetes service discovery to scrape pods annotated with prometheus.io/scrape=true
  • deployment.yaml: a single-replica Prometheus deployment and a LoadBalancer Service exposed on port 9090
  • rbac.yaml: the service account, cluster role, and bindings required for Kubernetes discovery

The current scrape configuration is focused on gateway-related pod metrics and adds useful labels such as namespace, pod name, and gateway name.
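
An annotation-driven scrape job of that shape typically looks like the sketch below. This is not the repository's actual configmap.yaml; job names and label choices may differ:

```yaml
# Sketch of an annotation-driven Prometheus scrape job.
scrape_configs:
  - job_name: gateway-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated with prometheus.io/scrape=true.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Attach namespace and pod name as labels on every series.
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```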

Metrics Proxy

The metrics-proxy/ directory contains a small FastAPI service that provides a stable API in front of Prometheus:

  • exposes POST /observations
  • executes curated PromQL queries from src/queries.yaml
  • aggregates the returned range-vector data into averaged numeric values
  • reads Prometheus from PROM_URL, which is set to http://prometheus:9090 in the in-cluster deployment
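
A curated query file of this kind might take a shape like the following. Both the key layout and the example query are hypothetical; the real format and query names are defined in src/queries.yaml itself:

```yaml
# Hypothetical shape for src/queries.yaml; key names and the PromQL
# below are illustrative, not copied from the repository.
queries:
  - name: ingest_rate_gold
    promql: sum(rate(http_requests_total{namespace="observability"}[5m]))
```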

Relevant files:

  • service.yaml: the Kubernetes Deployment and Service for metrics-proxy
  • src/main.py: FastAPI app and request handling
  • src/prom_client.py: Prometheus HTTP client
  • src/aggregation.py: result aggregation helpers
  • src/config.py: query configuration loading
  • src/queries.yaml: curated PromQL query definitions
  • Dockerfile, requirements.txt, Makefile: image build and packaging assets
  • README.md: component-level usage and development notes

The observability-system/kustomization.yaml file deploys the Prometheus resources, the metrics-proxy deployment/service, and a generated ConfigMap named metrics-proxy-queries from metrics-proxy/src/queries.yaml.

Architecture

At a high level, the repository defines this flow:

  1. External OTLP/HTTP traffic enters through one of the Gateway resources in observability-gateway.
  2. An HTTPRoute forwards the request to the matching prio-ingestion-gateway-<class> Service in observability.
  3. A TrafficPolicy applies the per-class rate limit.
  4. Prometheus in observability-system scrapes annotated cluster workloads for metrics.
  5. The metrics-proxy service exposes curated Prometheus-backed observations through a simplified HTTP API.

Deployment

Create the required namespaces first:

kubectl create ns observability
kubectl create ns observability-gateway
kubectl create ns observability-system

If you are also using Istio ambient mode in your cluster, apply any namespace labels separately. Those labels are not created by the manifests in this repository.
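
As one example of such a label (an assumption about your mesh setup, not something this repository manages), Istio ambient mode enrollment is commonly done with the istio.io/dataplane-mode=ambient namespace label:

```yaml
# Example only: enroll a namespace in Istio ambient mode.
# This label is NOT created by the manifests in this repository.
apiVersion: v1
kind: Namespace
metadata:
  name: observability
  labels:
    istio.io/dataplane-mode: ambient
```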

1. Deploy Control-Plane Services

Deploy Prometheus, the metrics-proxy service, and the generated queries ConfigMap:

kubectl apply -k observability-system

This step creates:

  • Prometheus deployment, service, and RBAC
  • metrics-proxy deployment and service
  • metrics-proxy-queries ConfigMap

2. Deploy Gateway Routing Resources

Deploy the external Gateway API layer:

kubectl apply -k observability-ingress

This step creates:

  • three Gateway resources
  • three HTTPRoute resources
  • three TrafficPolicy resources
  • one ReferenceGrant
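
For orientation, a per-class local rate-limit policy might be sketched as follows. The apiVersion and field layout follow kgateway's TrafficPolicy API as an assumption, and the names and limits are illustrative; rate-limiting.yaml is the authoritative source:

```yaml
# Illustrative local rate-limit policy for one traffic class.
apiVersion: gateway.kgateway.dev/v1alpha1
kind: TrafficPolicy
metadata:
  name: gold-rate-limit        # name is illustrative
  namespace: observability-gateway
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: gold-route         # illustrative route name
  rateLimit:
    local:
      tokenBucket:
        maxTokens: 100         # illustrative limits
        tokensPerFill: 100
        fillInterval: 1s
```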

3. Deploy the Observability Gateway Custom Resource

The ObservabilityGateway CRD and its controller must already be installed in the cluster. With that prerequisite in place, deploy the telemetry ingestion backend definition:

kubectl apply -f observability/observability-gateway.yaml

Traffic Classes And Ports

The repository currently defines one gateway listener per traffic class:

  • gold on port 4318
  • silver on port 4319
  • bronze on port 4320

Each route forwards to the matching backend Service in the observability namespace on port 4317:

  • prio-ingestion-gateway-gold
  • prio-ingestion-gateway-silver
  • prio-ingestion-gateway-bronze

Metrics Access

Prometheus is exposed as a LoadBalancer Service on port 9090 in the observability-system namespace.

The metrics-proxy Service is exposed internally as a ClusterIP Service on port 8000 and is intended to provide a smaller, curated API surface over Prometheus rather than exposing PromQL directly to callers.

Notes

  • The ingress design uses one Gateway and one HTTPRoute per traffic class.
  • The ReferenceGrant is created in the observability namespace even though it is packaged under observability-ingress/, because that is the namespace where the target Services live.
  • The top-level README describes the overall data collection orchestration deployment. For local development and API examples for the proxy, see observability-system/metrics-proxy/README.md.

About

System-level orchestration for the Observability-System, combining control-plane agents with data-plane operators and collectors into a unified deployment and coordination layer.
