1 change: 1 addition & 0 deletions package.json
@@ -43,6 +43,7 @@
"packages/sdk/server-ai/examples/openai",
"packages/sdk/server-ai/examples/tracked-chat",
"packages/sdk/server-ai/examples/chat-observability",
"packages/sdk/server-ai/examples/openai-observability",
"packages/sdk/server-ai/examples/vercel-ai",
"packages/telemetry/browser-telemetry",
"packages/sdk/combined-browser",
12 changes: 12 additions & 0 deletions packages/sdk/server-ai/examples/openai-observability/.env.example
@@ -0,0 +1,12 @@
# LaunchDarkly SDK Key (required)
LAUNCHDARKLY_SDK_KEY=your-launchdarkly-sdk-key-here

# AI Config key (optional, defaults to 'sample-ai-config')
LAUNCHDARKLY_AI_CONFIG_KEY=sample-ai-config

# Observability service identification (optional)
SERVICE_NAME=hello-js-openai-observability
SERVICE_VERSION=1.0.0

# OpenAI API Key (required)
OPENAI_API_KEY=your-openai-api-key-here
67 changes: 67 additions & 0 deletions packages/sdk/server-ai/examples/openai-observability/README.md
@@ -0,0 +1,67 @@
# Provider-Specific Observability Example (OpenAI)

This example shows how to use the LaunchDarkly observability plugin when calling an AI provider directly — without the higher-level `createChat` abstraction. It uses OpenAI as the provider, but the same pattern applies to any provider (Bedrock, Anthropic, Vercel AI SDK, etc.).

## How it works

1. **Initialize the LaunchDarkly client** with the `Observability` plugin — this enables automatic capture of SDK operations, flag evaluations, errors, logs, and distributed traces.
2. **Get the AI Config** via `completionConfig()` — this returns the model, messages, and parameters configured in LaunchDarkly, along with a `tracker` for reporting metrics.
3. **Call your provider directly** and wrap it with the tracker — the tracker records latency, token usage, and success/error status.

The tracker provides several methods depending on your provider. This example uses `trackMetricsOf` with the LaunchDarkly OpenAI provider's `getAIMetricsFromResponse` extractor:

| Method | Provider |
|--------|----------|
| `tracker.trackMetricsOf(OpenAIProvider.getAIMetricsFromResponse, fn)` | OpenAI (recommended) |
| `tracker.trackBedrockConverseMetrics(response)` | AWS Bedrock |
| `tracker.trackVercelAISDKGenerateTextMetrics(fn)` | Vercel AI SDK |
| `tracker.trackMetricsOf(extractor, fn)` | Any provider (custom extractor) |
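The custom-extractor row boils down to writing a function that maps your provider's raw response to the success/token-usage object the tracker records. A minimal sketch, where the response shape and the exact metrics field names are illustrative assumptions rather than the SDK's published types:

```typescript
// Hypothetical usage shape for an arbitrary provider; adjust to match
// whatever your provider's response actually contains.
interface ProviderUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

interface ProviderResponse {
  usage?: ProviderUsage;
}

// Maps a raw provider response to a metrics object. A function like this
// would be passed as the first argument to `tracker.trackMetricsOf(...)`;
// missing usage data falls back to zero rather than failing the call.
function extractMetrics(res: ProviderResponse) {
  return {
    success: true,
    usage: {
      total: res.usage?.total_tokens ?? 0,
      input: res.usage?.prompt_tokens ?? 0,
      output: res.usage?.completion_tokens ?? 0,
    },
  };
}
```

The extractor runs after your provider call resolves, so it should be a cheap, pure mapping with no I/O of its own.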

## Prerequisites

1. A LaunchDarkly account and SDK key
2. Node.js 16 or later
3. Node server SDK v9.10 or later (required for the observability plugin)
4. An OpenAI API key

## Setup

1. Install dependencies:

```bash
yarn install
```

2. Set up environment variables:

```bash
cp .env.example .env
```

Edit `.env` and add your keys.

3. Create an AI Config in LaunchDarkly (e.g. key `sample-ai-config`) with a completion-enabled variation and the model you want to use.

## Running the Example

```bash
yarn start
```

This will:
- Initialize the LaunchDarkly client with the observability plugin
- Retrieve the AI Config (model, messages, parameters) from LaunchDarkly
- Call OpenAI directly using your own client
- Automatically track latency, token usage, and success/error via the tracker

View your data in the LaunchDarkly dashboard under **Observability**.

## Adapting for other providers

To use a different provider, replace the OpenAI-specific parts:

1. Swap the OpenAI client for your provider's client
2. Use the appropriate tracker method (see table above), or use `trackMetricsOf` with a custom metrics extractor
3. Map `aiConfig.messages` and `aiConfig.model` to your provider's API format
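For step 3, the mapping is usually a small pure function. As a hedged sketch, here is what converting LaunchDarkly's OpenAI-style `{ role, content }` messages to a Bedrock Converse-style input might look like (the target shape follows Converse's documented format, where content is an array of blocks and system prompts are passed separately; the function name is ours):

```typescript
// LaunchDarkly AI Config messages use OpenAI-style role/content strings.
interface LDMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Bedrock's Converse API takes system prompts in a separate `system`
// field and wraps each message's content in an array of text blocks.
function toConverseInput(messages: LDMessage[]) {
  return {
    system: messages
      .filter((m) => m.role === 'system')
      .map((m) => ({ text: m.content })),
    messages: messages
      .filter((m) => m.role !== 'system')
      .map((m) => ({ role: m.role, content: [{ text: m.content }] })),
  };
}
```

Other providers need analogous but different mappings, which is why the AI Config keeps messages in one canonical shape and leaves the translation to your integration code.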

See the [bedrock](../bedrock/) example for an AWS Bedrock adaptation.
23 changes: 23 additions & 0 deletions packages/sdk/server-ai/examples/openai-observability/package.json
@@ -0,0 +1,23 @@
{
  "name": "openai-observability-example",
  "version": "1.0.0",
  "description": "LaunchDarkly AI SDK example: provider-specific observability with OpenAI",
  "scripts": {
    "build": "tsc",
    "start": "yarn build && node ./dist/index.js"
  },
  "dependencies": {
    "@launchdarkly/node-server-sdk": "workspace:^",
    "@launchdarkly/observability-node": "^1.0.0",
    "@launchdarkly/server-sdk-ai": "workspace:^",
    "@launchdarkly/server-sdk-ai-openai": "workspace:^",
    "@opentelemetry/instrumentation": "^0.57.0",
    "@traceloop/instrumentation-openai": "^0.22.0",
    "dotenv": "^16.0.0",
    "openai": "^5.12.2"
  },
  "devDependencies": {
    "@types/node": "^20.0.0",
    "typescript": "^5.0.0"
  }
}
99 changes: 99 additions & 0 deletions packages/sdk/server-ai/examples/openai-observability/src/index.ts
@@ -0,0 +1,99 @@
/* eslint-disable no-console */
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { OpenAIInstrumentation } from '@traceloop/instrumentation-openai';
import 'dotenv/config';

import { init, type LDContext } from '@launchdarkly/node-server-sdk';
import { Observability } from '@launchdarkly/observability-node';
import { initAi } from '@launchdarkly/server-sdk-ai';

const sdkKey = process.env.LAUNCHDARKLY_SDK_KEY;
const aiConfigKey = process.env.LAUNCHDARKLY_AI_CONFIG_KEY || 'sample-ai-config';

if (!sdkKey) {
  console.error('*** Please set the LAUNCHDARKLY_SDK_KEY environment variable first');
  process.exit(1);
}

// ── 1. Initialize the LaunchDarkly client with the Observability plugin ──
// The plugin automatically captures SDK operations, flag evaluations,
// error monitoring, logging, and distributed tracing.
const ldClient = init(sdkKey, {
  plugins: [
    new Observability({
      serviceName: process.env.SERVICE_NAME || 'hello-js-openai-observability',
      serviceVersion: process.env.SERVICE_VERSION || '1.0.0',
    }),
  ],
});

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
});

const context: LDContext = {
  kind: 'user',
  key: 'example-user-key',
  name: 'Sandy',
};

async function main() {
  try {
    await ldClient.waitForInitialization({ timeout: 10 });
    console.log('*** SDK successfully initialized');
  } catch (error) {
    console.error(`*** SDK failed to initialize: ${error}`);
    process.exit(1);
  }

  const aiClient = initAi(ldClient);

  // ── 2. Import the provider and OpenAI after instrumentation so OpenLLMetry can patch the client ──
  const { OpenAIProvider } = await import('@launchdarkly/server-sdk-ai-openai');
  const { OpenAI } = await import('openai');
  const openai = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  });

  // ── 3. Get the AI Config (model, messages, parameters) from LaunchDarkly ──
  // `completionConfig` returns the resolved configuration plus a `tracker`
  // that you use to report metrics back to LaunchDarkly.
  const aiConfig = await aiClient.completionConfig(
    aiConfigKey,
    context,
    {
      model: { name: 'gpt-4' },
      enabled: false,
    },
    { example_type: 'provider_observability_demo' },
  );

  if (!aiConfig.enabled || !aiConfig.tracker) {
    console.log('*** AI configuration is not enabled');
    ldClient.close();
    process.exit(0);
  }

  try {
    // ── 4. Call OpenAI and track metrics with the provider's extractor ──
    const completion = await aiConfig.tracker.trackMetricsOf(
      OpenAIProvider.getAIMetricsFromResponse,
      () =>
        openai.chat.completions.create({
          messages: aiConfig.messages || [],
          model: aiConfig.model?.name || 'gpt-4',
          temperature: (aiConfig.model?.parameters?.temperature as number) ?? 0.5,
          max_tokens: (aiConfig.model?.parameters?.maxTokens as number) ?? 4096,
        }),
    );

    console.log('AI Response:', completion.choices[0]?.message.content);
    console.log('\nSuccess.');
  } catch (err) {
    console.error('Error:', err);
  } finally {
    ldClient.close();
  }
}

main();
18 changes: 18 additions & 0 deletions packages/sdk/server-ai/examples/openai-observability/tsconfig.json
@@ -0,0 +1,18 @@
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "CommonJS",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true,
    "strict": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "outDir": "./dist",
    "rootDir": "./src",
    "declaration": true,
    "sourceMap": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist"]
}