feat(sdk): [Enterprise Integration]: Add provider-agnostic tracing #145
namrataghadi-galileo wants to merge 4 commits into main from
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.
Why not emit these from the server? |
@lan17 Keeping emission in the SDK gives us one merged, ordered batch and a single integration model for both logger and non-logger flows |
What about other language SDKs, like TypeScript, Java, etc.? If we can delegate this to the agent control server, it makes the SDKs easier. Could we just send additional metadata to the agent control server when we send events, to integrate with Galileo spans/traces? |
Server-side emission is attractive for thinner SDKs, but it defeats the main purpose of the logger-based design: reusing Galileo Logger’s in-process trace buffering and flush. For logger integrations, the SDK must have the merged control events locally before flush. Other languages can still be supported through the thinner OTEL non-logger path until they need full logger integration. |
Should this be done outside of the core OSS SDK, then? This pattern conflicts with one @abhinav-galileo implemented where we emit events to the server (also via a buffer on the SDK), so it may be confusing to have both systems at the same time. |
The current PR only adds provider-agnostic hooks to OSS. It does not move logger context into OSS or change the default buffered event-to-server flow. Logger context, span conversion, trace attachment, and flush integration still belong to the external Galileo integration layer, so OSS is not taking on a second concrete observability system. Also, it would be confusing if both were active default systems. It is much less confusing if the existing buffered SDK-to-server flow remains the only default OSS behavior and the new sink/provider APIs are treated purely as optional extension points for external integrations. |
I’m also waiting to hear back from both Davids on the RFC and get their perspective on the proposed logger-based and non-logger-based approaches. That said, the hook additions themselves are independent of any Galileo-specific observability or tracing design. They are generic enough to support other third-party observability systems as well. For example, if LangSmith is the external observability framework, these hooks could still be used to publish trace_id and span_id. |
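As a concrete illustration of that genericity, a provider hook of roughly this shape would let any external framework publish its own IDs. This is a minimal sketch: `get_trace_context_from_provider` matches the name in the diff, but `set_trace_context_provider` and the registry variable are assumed names for the example, not necessarily the PR's actual API.

```python
from typing import Callable, Optional

# Hypothetical module-level registry; the real hook in the PR may differ.
TraceContextProvider = Callable[[], Optional[dict]]
_trace_context_provider: Optional[TraceContextProvider] = None

def set_trace_context_provider(provider: TraceContextProvider) -> None:
    """Register an external framework's trace-context provider."""
    global _trace_context_provider
    _trace_context_provider = provider

def get_trace_context_from_provider() -> Optional[dict]:
    """Consult the registered provider; provider failures never propagate."""
    if _trace_context_provider is None:
        return None
    try:
        return _trace_context_provider()
    except Exception:
        return None  # a broken provider must not break SDK calls

# Example: a LangSmith-style integration publishing its own IDs.
set_trace_context_provider(lambda: {"trace_id": "abc123", "span_id": "def456"})
ctx = get_trace_context_from_provider()
print(ctx)  # {'trace_id': 'abc123', 'span_id': 'def456'}
```

Because the provider is just a callable returning a dict, nothing in this surface is Galileo-specific.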
lan17 left a comment:
I like the attempt to keep the new telemetry surface generic, but there are still a few code-level gaps before this is ready to merge. I left inline comments on the sink wiring, provider validation, and the partial integration of the provider across SDK entry points.
```python
_control_event_sink = sink
...
def emit_control_events(events: list[ControlExecutionEvent]) -> None:
```
This adds a sink registration API, but no SDK evaluation path ever calls emit_control_events(). Local results still flow through evaluation._emit_local_events() into observability.add_event(), and server results are only merged as EvaluationResult. As written, set_control_event_sink() does not observe real control executions unless callers manually construct ControlExecutionEvent lists themselves.
I am handling that in another ticket: https://app.shortcut.com/galileo/story/59787/2-add-merged-control-event-data-flow-to-sdk?vc_group_by=day&ct_workflow=all&cf_workflow=500018000
The goal was to add only the tracing interfaces in this PR and do the heavy lifting in a follow-up PR; otherwise this PR would have been too big and difficult to review.
```python
trace_id = trace_context.get("trace_id")
span_id = trace_context.get("span_id")
if not isinstance(trace_id, str) or not isinstance(span_id, str):
```
This validates types, but it still accepts empty strings. A provider returning {"trace_id": "", "span_id": ""} currently passes through as valid context, and the tracing helpers will return those empty IDs unchanged. That can silently drop headers via truthiness checks or produce uncorrelated local events; these values should be rejected here.
```python
return trace_id, span_id
```

```python
# Try external provider
trace_context = get_trace_context_from_provider()
```
This wires the provider into the tracing helpers, but the public evaluate_controls() path still forwards trace_id / span_id unchanged when callers omit them. So the new provider works for flows that use these helpers, but not for all public SDK evaluation entry points. That inconsistency should be fixed before we rely on this as the generic trace-context hook.
Good catch. You’re right that just wiring the provider into tracing.py is not enough if the public evaluation path can still bypass it when trace_id / span_id are omitted. I’ll fix that.
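For reference, the intended resolution order (external provider first, OTEL context as fallback) can be sketched as follows. The two stand-in stubs here are placeholders for the example, not the SDK's actual functions.

```python
from typing import Optional, Tuple

def _provider_context() -> Optional[dict]:
    # Stand-in for get_trace_context_from_provider(); no provider registered here.
    return None

def _otel_context() -> Optional[Tuple[str, str]]:
    # Stand-in for reading IDs from the active OTEL span context.
    return ("otel-trace", "otel-span")

def resolve_trace_context() -> Optional[Tuple[str, str]]:
    """Consult the external provider first, then fall back to OTEL."""
    ctx = _provider_context()
    if ctx:
        trace_id, span_id = ctx.get("trace_id"), ctx.get("span_id")
        # Non-empty strings only, matching the validation fix above.
        if isinstance(trace_id, str) and trace_id and isinstance(span_id, str) and span_id:
            return trace_id, span_id
    return _otel_context()

print(resolve_trace_context())  # ('otel-trace', 'otel-span')
```

The fix discussed above amounts to routing every public entry point, including evaluate_controls(), through a single resolver like this when callers omit trace_id / span_id.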
Summary
Added a new provider-agnostic telemetry package to the AgentControl Python SDK for external trace context resolution and merged control event emission.
Updated tracing to consult a registered external trace context provider before falling back to OTEL context.
Exported the new telemetry APIs from the top-level agent_control package.
Added focused tests to ensure provider/sink failures do not affect existing behavior.
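As an illustration of the failure-isolation behavior those tests cover, a provider that raises must simply yield no external context rather than break the caller. Hypothetical helper, not the SDK's actual function:

```python
def get_trace_context_safely(provider):
    """Swallow provider errors so external tracing failures
    never affect existing SDK behavior."""
    try:
        return provider()
    except Exception:
        return None

def failing_provider():
    raise RuntimeError("external tracing backend is down")

# A raising provider must not break the caller; it just yields no context.
assert get_trace_context_safely(failing_provider) is None
print("provider failure isolated")
```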
Scope
User-facing/API changes:
New SDK APIs: set_control_event_sink(), emit_control_events(), get_trace_context_from_provider()
Internal changes:
Added:
Risk and Rollout
Risk level: low
Rollback plan:
Revert the new telemetry package and the small tracing/export changes in the SDK.
Since the change is additive and inactive unless a provider or sink is explicitly registered, rollback is straightforward.
Testing
Checklist