# Feature Request: Codex Integration Guide + Codex-Aware Observability
## Problem
OpenAI Codex has built-in OTel support that emits structured log events and metrics via OTLP (docs). Logfire accepts any OTLP data. In theory these should just work together — but in practice, there's no documentation on either side showing how to connect them, and the config isn't straightforward.
For example, Codex's exporter config looks like this:
```toml
[otel.exporter.otlp-http]
endpoint = "https://otel.example.com/v1/logs"
protocol = "binary"
headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
```
But trying to point this at Logfire raises immediate questions:
- Which endpoint? Logfire uses separate paths per signal (`/v1/logs`, `/v1/traces`, `/v1/metrics`). Codex emits both log events and metrics. Does Codex's exporter append signal-specific paths to a base URL (standard OTel behavior), or does it send everything to the single configured endpoint? The Codex docs don't say.
- Which protocol value? Logfire recommends `http/protobuf`. Codex's config uses `protocol = "binary"`. Are these equivalent? Unclear.
- Auth format? Logfire expects `Authorization = "your-write-token"` in headers. Codex's example uses a custom header key. Presumably you just swap the header name, but it would be nice to see this confirmed.
- Metrics pipeline? Codex emits OTel counters and histograms (`codex.tool.call`, `turn.e2e_duration_ms`, `codex.api_request.duration_ms`, etc.) that are separate from the log events. It's unclear whether a single `otlp-http` exporter config sends both signals or only logs.
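For reference, the OTLP/HTTP spec already defines the behavior the first question asks about: a base endpoint has a per-signal path appended, while a signal-specific endpoint is used verbatim. A minimal sketch of that resolution logic (whether Codex's exporter actually follows it is exactly what needs confirming):

```python
# Standard OTLP/HTTP endpoint resolution, per the OTLP spec: a base
# endpoint gets the per-signal path appended; an endpoint that already
# names a signal path is used as-is. This is the behavior an OTel-spec-
# compliant exporter implements; Codex's behavior is unconfirmed.

SIGNAL_PATHS = {"logs": "/v1/logs", "traces": "/v1/traces", "metrics": "/v1/metrics"}

def resolve_endpoint(base: str, signal: str, signal_specific: bool = False) -> str:
    """Return the URL an OTLP/HTTP exporter would POST the given signal to."""
    if signal_specific:
        return base  # endpoint already points at one signal; use verbatim
    return base.rstrip("/") + SIGNAL_PATHS[signal]
```

If Codex follows the spec, a bare base URL would route both logs and metrics correctly; if not, a single endpoint ending in `/v1/logs` would silently drop metrics.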
My best guess at a working config is:
```toml
[otel]
environment = "codex-dev"
log_user_prompt = false

[otel.exporter.otlp-http]
endpoint = "https://logfire-us.pydantic.dev"
protocol = "binary"
headers = { "Authorization" = "${CODEX_LOGFIRE_WRITE_TOKEN}" }
```
But I have not been able to get this working.
## Proposed Solution
### 1. Documentation: a Codex integration guide
Similar to the existing Airflow integration page, a doc showing the exact Codex TOML config needed to export to Logfire, covering both log events and metrics. Ideally confirming the correct endpoint, protocol value, auth header format, and whether a single exporter config captures both logs and metrics.
### 2. If needed: any integration work to make the data land cleanly
It's possible this just works today with the right config and no code changes on Logfire's side. But if Codex's OTel output has any quirks that need handling (non-standard signal routing, unusual attribute naming, etc.), a lightweight integration shim — similar to existing ones for Airflow or FastAPI — would make the experience seamless.
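As a purely hypothetical sketch of the kind of shim meant here, an attribute-renaming pass is one common shape such integrations take. The key names below are invented placeholders for illustration, not Codex's actual output:

```python
# Hypothetical shim sketch: remap non-standard attribute keys on incoming
# records to conventional names before they reach the UI. The source keys
# below are assumptions for illustration; "gen_ai.request.model" is an
# OTel GenAI semantic-convention attribute used here as an example target.

ATTRIBUTE_REMAP = {
    "codex.tool.name": "tool.name",              # assumed source key
    "codex.model": "gen_ai.request.model",       # assumed source key
}

def normalize_attributes(attrs: dict) -> dict:
    """Rename known keys, passing everything else through unchanged."""
    return {ATTRIBUTE_REMAP.get(key, key): value for key, value in attrs.items()}
```

The real mapping, if one is needed at all, would come from inspecting what Codex actually emits.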
## What This Unlocks for Users
Once Codex data is flowing into Logfire, users can build on top of it themselves. Codex emits a rich, well-structured telemetry schema, so the possibilities are pretty compelling:
- Session timelines — reconstructed from `codex.conversation_starts` through tool calls and completions
- Tool call success/failure analysis — from `codex.tool.call` counters, broken down by tool name
- Turn latency tracking — from `turn.e2e_duration_ms` histograms
- Approval pattern analysis — from `approval.requested` metrics (approved / denied / amended, by tool)
- Model warning monitoring — from `model_warning` counters
- Compaction frequency tracking — from `task.compact` counters, which may indicate context or prompt issues
- Session-level gap classification — analyzing completed sessions to auto-detect patterns like repeated tool failures, high denial rates, or excessive compactions
This last one is inspired by a pattern Brian Scanlan shared where his team built a post-session hook that analyzes transcripts with a lightweight model, classifies gaps, and posts to Slack with pre-filled GitHub issue URLs. With Codex data in Logfire, users could build similar feedback loops using Logfire's SQL queries and existing tooling.
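A minimal sketch of what that classification step could look like, assuming per-session aggregates have already been pulled out of Logfire (the field names and thresholds below are illustrative assumptions, not Codex's schema):

```python
# Rough sketch of session-level gap classification over per-session
# aggregates (e.g. the result row of a Logfire SQL query). Field names
# and thresholds are illustrative, not taken from Codex's actual schema.

def classify_gaps(session: dict) -> list[str]:
    """Flag suspicious patterns in one completed Codex session."""
    gaps = []
    if session.get("tool_failures", 0) >= 3:
        gaps.append("repeated-tool-failures")
    requested = session.get("approvals_requested", 0)
    if requested and session.get("approvals_denied", 0) / requested > 0.5:
        gaps.append("high-denial-rate")
    if session.get("compactions", 0) >= 5:
        gaps.append("excessive-compaction")
    return gaps
```

The output could then feed a Slack post or a pre-filled issue URL, as in the pattern described above.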
## Why This Fits Logfire
Logfire already accepts the data Codex emits. The immediate gap is documentation — confirming the config works and showing teams how to set it up. Codex is becoming a widely used coding agent, its telemetry schema is well-defined and consistent, and Logfire is a natural home for this data. A small investment in docs (and possibly a lightweight integration) would open up a whole category of use cases.