From f171b6631e3333f39f5450a85f74f4e6bdca86a2 Mon Sep 17 00:00:00 2001
From: Priya
Date: Wed, 6 May 2026 02:00:54 +0530
Subject: [PATCH 1/2] feat(contact-center): initial changes for implementing
 get suggested responses feature

---
 .../real-time-transcripts-ARCHITECTURE.md     | 158 ++++++++++
 .../real-time-transcripts-DISCOVERY.md        | 298 ++++++++++++++++++
 .../src/ai-docs/templates/Discovery.prompt.md |  83 +++++
 .../ai-docs/templates/Discovery.template.md   | 246 +++++++++++++++
 packages/@webex/contact-center/src/cc.ts      |   5 +-
 .../@webex/contact-center/src/constants.ts    |   1 +
 .../contact-center/src/metrics/constants.ts   |   2 +
 .../src/services/ApiAiAssistant.ts            | 111 ++++++-
 .../src/services/config/types.ts              |   8 +
 .../src/services/task/TaskManager.ts          |  13 +-
 .../src/services/task/ai-docs/AGENTS.md       |   7 +
 .../src/services/task/ai-docs/ARCHITECTURE.md |  12 +
 packages/@webex/contact-center/src/types.ts   |  23 ++
 .../contact-center/test/unit/spec/cc.ts       |   2 +
 .../test/unit/spec/services/ApiAiAssistant.ts |  96 +++++-
 15 files changed, 1044 insertions(+), 21 deletions(-)
 create mode 100644 packages/@webex/contact-center/src/ai-docs/discovery/real-time-transcripts-ARCHITECTURE.md
 create mode 100644 packages/@webex/contact-center/src/ai-docs/discovery/real-time-transcripts-DISCOVERY.md
 create mode 100644 packages/@webex/contact-center/src/ai-docs/templates/Discovery.prompt.md
 create mode 100644 packages/@webex/contact-center/src/ai-docs/templates/Discovery.template.md

diff --git a/packages/@webex/contact-center/src/ai-docs/discovery/real-time-transcripts-ARCHITECTURE.md b/packages/@webex/contact-center/src/ai-docs/discovery/real-time-transcripts-ARCHITECTURE.md
new file mode 100644
index 00000000000..011cf46a2de
--- /dev/null
+++ b/packages/@webex/contact-center/src/ai-docs/discovery/real-time-transcripts-ARCHITECTURE.md
@@ -0,0 +1,158 @@
+# Real-Time Transcripts — Architecture Addendum
+
+**Feature:** Real-time transcript streaming for active calls
+**Companion Spec:** 
`real-time-transcripts-DISCOVERY.md` +**Primary Reference:** [SPIKE Discovery: Real Time Transcripts using WxCC-SDK](https://confluence-eng-gpk2.cisco.com/conf/spaces/WSDK/pages/790253498/SPIKE+Discovery+Real+Time+Transcripts+using+WxCC-SDK) + +--- + +## 1) Scope of This Addendum + +This file focuses on architecture and runtime behavior only: + +- event flow +- lifecycle sequencing +- error/recovery transitions +- integration boundaries (`@webex/calling` <-> WxCC consumer) + +For API and payload contracts, use `real-time-transcripts-DISCOVERY.md`. + +--- + +## 2) Component Boundaries + +```mermaid +flowchart LR + App[Application / Agent Desktop] + WxCC[WxCC Service Layer] + Call[Call / ICall] + Events[Typed Eventing: CALL_EVENT_KEYS] + Backend[Transcript Backend Stream] + + App --> WxCC + WxCC --> Call + Call --> Events + Events --> WxCC + WxCC --> App + Call <--> Backend +``` + +--- + +## 3) Lifecycle Sequence + +### 3.1 Start + Stream + Stop + +```mermaid +sequenceDiagram + participant App as Application + participant W as WxCC Service + participant C as Call + participant B as Transcript Backend + + App->>W: subscribeToTranscript(callId) + W->>C: startTranscript(options?) 
+ C->>B: Open transcript stream + B-->>C: stream opened + C-->>W: emit TRANSCRIPT_STARTED + + loop while call active + B-->>C: interim chunk (seq=n) + C-->>W: emit TRANSCRIPT_PARTIAL + B-->>C: final chunk (seq=n+1) + C-->>W: emit TRANSCRIPT_FINAL + end + + App->>W: stopTranscript(callId) OR call disconnect + W->>C: stopTranscript() + C->>B: close stream + C-->>W: emit TRANSCRIPT_STOPPED +``` + +### 3.2 Error and Recovery + +```mermaid +sequenceDiagram + participant W as WxCC Service + participant C as Call + participant B as Transcript Backend + + B--x C: transient network error + C-->>W: emit TRANSCRIPT_ERROR (recoverable=true, retryInMs) + C->>C: state active -> reconnecting + C->>B: retry open stream + + alt recovery succeeds + B-->>C: stream resumed + C->>C: state reconnecting -> active + C-->>W: emit TRANSCRIPT_STARTED (resumed=true) + else recovery fails terminally + C->>C: state reconnecting -> error + C-->>W: emit TRANSCRIPT_ERROR (recoverable=false) + C-->>W: emit TRANSCRIPT_STOPPED + end +``` + +--- + +## 4) Event Flow Contract (Runtime) + +```mermaid +flowchart TD + A[CALL_EVENT_KEYS.ESTABLISHED] --> B{autoStartOnCallEstablished?} + B -- yes --> C[startTranscript] + B -- no --> D[Wait for explicit start] + C --> E[emit TRANSCRIPT_STARTED] + D --> E2[startTranscript by consumer] + E2 --> E + E --> F[emit TRANSCRIPT_PARTIAL / TRANSCRIPT_FINAL] + F --> G{disconnect / explicit stop / terminal error} + G -- stop/disconnect --> H[emit TRANSCRIPT_STOPPED] + G -- terminal error --> I[emit TRANSCRIPT_ERROR] + I --> H +``` + +--- + +## 5) State Model + +```mermaid +stateDiagram-v2 + [*] --> idle + idle --> starting: startTranscript() + starting --> active: stream_opened + active --> reconnecting: transient_error + reconnecting --> active: resumed + reconnecting --> error: terminal_failure + active --> stopped: stop/disconnect + starting --> stopped: stop/disconnect + error --> stopped + stopped --> [*] +``` + +--- + +## 6) Integration Notes for WxCC Consumer + +- 
Prefer subscribing via typed SDK events instead of raw backend payload parsing. +- Treat `TRANSCRIPT_PARTIAL` as replaceable; treat `TRANSCRIPT_FINAL` as immutable. +- Use `(callId, sequence)` for dedupe and ordering. +- Preserve `recoverable` and `retryInMs` from `TRANSCRIPT_ERROR` for UX hints. + +--- + +## 7) Edge Cases to Validate + +1. Call ends while transcript stream is reconnecting. +2. Duplicate transcript sequence from backend replay. +3. Late partial arrives after final for same segment. +4. Transfer/handoff scenario where call context changes. +5. Consumer unsubscribes before stream close ack. + +--- + +## 8) Traceability + +- Public/API contract: `real-time-transcripts-DISCOVERY.md` +- Package-level architecture: `packages/calling/src/ai-docs/ARCHITECTURE.md` +- Package-level routing/spec guide: `packages/calling/src/ai-docs/AGENTS.md` diff --git a/packages/@webex/contact-center/src/ai-docs/discovery/real-time-transcripts-DISCOVERY.md b/packages/@webex/contact-center/src/ai-docs/discovery/real-time-transcripts-DISCOVERY.md new file mode 100644 index 00000000000..f2ed753445d --- /dev/null +++ b/packages/@webex/contact-center/src/ai-docs/discovery/real-time-transcripts-DISCOVERY.md @@ -0,0 +1,298 @@ +# Discovery: Real-Time Transcripts using WxCC-SDK + +**Feature:** Real-time transcript streaming during active calls +**Primary Reference:** [SPIKE Discovery: Real Time Transcripts using WxCC-SDK](https://confluence-eng-gpk2.cisco.com/conf/spaces/WSDK/pages/790253498/SPIKE+Discovery+Real+Time+Transcripts+using+WxCC-SDK) +**Status:** Draft +**Owner:** TBD +**Last Updated:** 2026-04-01 + +--- + +## 1) Problem Statement + +Applications integrating `@webex/calling` and WxCC need typed, near real-time call transcript events to power assist workflows (agent assist, note support, analytics hooks) while a call is in progress. + +Current call APIs expose call/media lifecycle events but do not define a first-class transcript contract in this package-level spec. 
+ +### In Scope + +- Transcript lifecycle contract for active calls +- Event keys and payload schemas for transcript updates +- Error and reconnection behavior +- Consumer integration points for WxCC-facing services + +### Out of Scope + +- Post-call archive retrieval APIs +- Summarization/LLM generation +- UI rendering details + +--- + +## 2) Goals and Non-Goals + +### Goals + +- Define implementation-ready API and event contracts +- Define payload validation and ordering rules +- Define retry/recovery behavior on network or backend errors +- Define testing and acceptance criteria + +### Non-Goals + +- Persisting transcript history in SDK storage +- New auth model beyond existing call/session auth + +--- + +## 3) Existing System Context + +### Relevant Code Areas + +- `packages/calling/src/Events/types.ts` +- `packages/calling/src/CallingClient/calling/types.ts` +- `packages/calling/src/CallingClient/calling/call.ts` +- `packages/calling/src/CallingClient/calling/callManager.ts` +- `packages/@webex/plugin-cc/src/services/WebCallingService.ts` + +### Existing Specs to Reuse + +- `packages/calling/src/ai-docs/AGENTS.md` +- `packages/calling/src/ai-docs/ARCHITECTURE.md` + +--- + +## 4) Proposed Behavior (End-to-End) + +1. Call reaches established state. +2. Transcript stream starts (auto-start or explicit start API based on config). +3. Transcript events are emitted as partial and final chunks. +4. On recoverable failures, emit typed error and attempt resume. +5. On call end or explicit stop, emit transcript stopped event and finalize stream. 
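From a consumer's point of view, the five steps above could look like the sketch below. The transcript event names and call-level APIs are proposed by this document and do not exist in the SDK yet, so a minimal in-memory emitter stands in for the real call object.

```typescript
// Sketch of the proposed end-to-end flow from the consumer side.
// TRANSCRIPT_PARTIAL / TRANSCRIPT_FINAL are PROPOSED event names from this
// spec; FakeCall is a stand-in for the eventual call object.

type TranscriptChunk = {sequence: number; text: string; isFinal: boolean};
type Handler = (chunk: TranscriptChunk) => void;

class FakeCall {
  private handlers: Record<string, Handler[]> = {};

  on(event: string, handler: Handler): void {
    (this.handlers[event] ??= []).push(handler);
  }

  emit(event: string, chunk: TranscriptChunk): void {
    for (const handler of this.handlers[event] ?? []) handler(chunk);
  }
}

const call = new FakeCall();
let draft = ''; // partials are replaceable, so keep only the latest
const finals: string[] = []; // finals are immutable, so append

call.on('TRANSCRIPT_PARTIAL', (chunk) => {
  draft = chunk.text; // overwrite, never append
});
call.on('TRANSCRIPT_FINAL', (chunk) => {
  finals.push(chunk.text);
  draft = '';
});

// Step 3 simulated: two interim revisions, then the final chunk.
call.emit('TRANSCRIPT_PARTIAL', {sequence: 1, text: 'how can', isFinal: false});
call.emit('TRANSCRIPT_PARTIAL', {sequence: 2, text: 'how can I help', isFinal: false});
call.emit('TRANSCRIPT_FINAL', {sequence: 3, text: 'how can I help you today?', isFinal: true});
```

Treating partials as replaceable and finals as append-only is the consumer-side counterpart of the ordering rules defined later in this document.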
+ +--- + +## 5) Public API Contract (Proposed) + +### 5.1 Call-level Transcript APIs + +| Module | Interface/Class | Proposed Method | Purpose | +|---|---|---|---| +| Calling | `ICall` / `Call` | `startTranscript(options?: TranscriptOptions): Promise` | Start transcript stream for current call | +| Calling | `ICall` / `Call` | `stopTranscript(reason?: TranscriptStopReason): Promise` | Stop transcript stream | +| Calling | `ICall` / `Call` | `getTranscriptState(): TranscriptState` | Return transcript lifecycle state | + +### 5.2 Config Surface (Proposed) + +```typescript +type TranscriptConfig = { + enabled?: boolean; + autoStartOnCallEstablished?: boolean; + includeInterim?: boolean; + languageCode?: string; + diarization?: boolean; +}; +``` + +Add under calling client config or call-level options based on code-owner decision. + +--- + +## 6) Event Contract (Listen + Emit) + +### 6.1 New Event Keys (Proposed additions in `CALL_EVENT_KEYS`) + +- `TRANSCRIPT_STARTED` +- `TRANSCRIPT_PARTIAL` +- `TRANSCRIPT_FINAL` +- `TRANSCRIPT_STOPPED` +- `TRANSCRIPT_ERROR` + +### 6.2 Event Payload Schemas (Proposed) + +```typescript +type TranscriptBasePayload = { + callId: string; + correlationId: string; + transcriptId?: string; + sequence: number; + timestampMs: number; + speaker?: 'agent' | 'customer' | 'unknown'; + channel?: 'inbound' | 'outbound' | 'mixed'; +}; + +type TranscriptPartialPayload = TranscriptBasePayload & { + text: string; + isFinal: false; + confidence?: number; + startOffsetMs?: number; + endOffsetMs?: number; +}; + +type TranscriptFinalPayload = TranscriptBasePayload & { + text: string; + isFinal: true; + confidence?: number; + startOffsetMs?: number; + endOffsetMs?: number; +}; + +type TranscriptErrorPayload = TranscriptBasePayload & { + code: string; + message: string; + recoverable: boolean; + retryInMs?: number; +}; +``` + +### 6.3 Ordering Rules + +- `TRANSCRIPT_STARTED` emitted once before first transcript chunk. 
+- `sequence` is monotonically increasing per call. +- `TRANSCRIPT_FINAL` chunks are immutable. +- `TRANSCRIPT_STOPPED` emitted at most once per active transcript session. + +--- + +## 7) Payload Handling Rules + +### Input Validation + +- Ignore chunks with missing `callId` or `sequence`. +- Ignore empty text chunks unless metadata-only updates are explicitly supported. + +### Normalization + +- Normalize whitespace and newline patterns. +- Preserve speaker/channel metadata when available. + +### Deduplication + +- Drop duplicates by `(callId, sequence)` key. + +--- + +## 8) State and Lifecycle + +### Transcript State Model + +`idle -> starting -> active -> stopped` + +Recovery branches: + +- `active -> reconnecting -> active` +- `active|starting|reconnecting -> error -> stopped` (terminal) + +### Trigger Mapping + +- Auto start (if enabled) on `CALL_EVENT_KEYS.ESTABLISHED`. +- Force stop on call disconnect/end. + +--- + +## 9) Error Handling + +### Categories + +- Authentication/authorization +- Backend unavailable/timeouts +- Rate limiting/throttling +- Invalid transcript config + +### Behavior + +- Recoverable failures: emit `TRANSCRIPT_ERROR` with `recoverable = true`, schedule retry. +- Non-recoverable failures: emit `TRANSCRIPT_ERROR`, then `TRANSCRIPT_STOPPED`. + +--- + +## 10) Logging and Metrics + +### Logging + +- `info`: start/stop/resume transcript +- `warn`: retry/recoverable interruption +- `error`: terminal transcript failure + +Context fields: `file`, `method`, `callId`, `correlationId`, `transcriptId`. + +### Metrics (Proposed) + +- `TRANSCRIPT_START_ATTEMPT` +- `TRANSCRIPT_START_SUCCESS` +- `TRANSCRIPT_START_FAILURE` +- `TRANSCRIPT_PARTIAL_RECEIVED` +- `TRANSCRIPT_FINAL_RECEIVED` +- `TRANSCRIPT_RECOVERY_ATTEMPT` +- `TRANSCRIPT_STOP` + +--- + +## 11) Backward Compatibility + +- Additive change only (new APIs/events). +- Existing call flows remain unaffected when transcript feature disabled. 
+ +--- + +## 12) Security and Privacy + +- Treat transcript text as sensitive content. +- Do not log raw transcript payload by default. +- Apply redaction rules to telemetry/log export paths. + +--- + +## 13) Implementation Plan (File-Level) + +| Step | File(s) | Change | +|---|---|---| +| 1 | `packages/calling/src/Events/types.ts` | Add transcript event keys and typed callback signatures | +| 2 | `packages/calling/src/CallingClient/calling/types.ts` | Add transcript payload and state types | +| 3 | `packages/calling/src/CallingClient/calling/call.ts` | Implement transcript lifecycle methods and emit events | +| 4 | `packages/calling/src/CallingClient/types.ts` | Add transcript config surface if needed | +| 5 | `packages/@webex/plugin-cc/src/services/WebCallingService.ts` | Consume and forward transcript events for WxCC | +| 6 | calling/plugin-cc tests | Add unit and integration tests for transcript flows | + +--- + +## 14) Test Plan + +### Unit + +- Transcript start/stop API behavior +- Event order and payload shape validation +- Partial/final dedupe and update behavior +- Error and retry branch coverage + +### Integration + +- Call established -> transcript start +- Network disruption -> transcript recovery behavior +- Call end -> transcript stop and cleanup + +### Negative + +- Invalid payloads +- Missing metadata +- 401/429/5xx backend responses + +--- + +## 15) Acceptance Criteria + +- [ ] Transcript API/event contract finalized with code owners +- [ ] Typed payloads added and validated +- [ ] Recovery/error behavior implemented and tested +- [ ] WxCC consumer path integrated +- [ ] Docs updated in AGENTS/ARCHITECTURE references + +--- + +## 16) Open Questions + +1. Auto-start policy defaults for `calling` vs `contactcenter` flows? +2. Speaker attribution source and confidence handling contract? +3. Behavior across transfer/consult/conference boundaries? +4. Resume vs restart policy after reconnect? +5. Need server-acknowledged sequence checkpointing? 
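For reference, the state model from section 8 can be written as a transition table; this is illustrative only (the eventual SDK implementation may differ) and doubles as a convenient unit-test fixture:

```typescript
// Transition table for the proposed transcript lifecycle:
// idle -> starting -> active -> stopped, with reconnecting/error branches.

type TranscriptState = 'idle' | 'starting' | 'active' | 'reconnecting' | 'error' | 'stopped';

const TRANSITIONS: Record<string, TranscriptState> = {
  'idle:start': 'starting',
  'starting:stream_opened': 'active',
  'active:transient_error': 'reconnecting',
  'reconnecting:resumed': 'active',
  'starting:terminal_failure': 'error',
  'active:terminal_failure': 'error',
  'reconnecting:terminal_failure': 'error',
  'starting:stop': 'stopped',
  'active:stop': 'stopped',
  'error:finalize': 'stopped',
};

function next(state: TranscriptState, trigger: string): TranscriptState {
  // Unknown (state, trigger) pairs leave the state unchanged.
  return TRANSITIONS[`${state}:${trigger}`] ?? state;
}
```

Encoding the model this way keeps the recovery branches explicit and makes illegal transitions (for example, resuming from `stopped`) impossible by omission.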
diff --git a/packages/@webex/contact-center/src/ai-docs/templates/Discovery.prompt.md b/packages/@webex/contact-center/src/ai-docs/templates/Discovery.prompt.md new file mode 100644 index 00000000000..7b29f8ab480 --- /dev/null +++ b/packages/@webex/contact-center/src/ai-docs/templates/Discovery.prompt.md @@ -0,0 +1,83 @@ +# Prompt Template: Generate `Discovery.md` from Inputs + +Use this prompt with an LLM to generate an implementation-ready `Discovery.md` using `Discovery.template.md`. + +--- + +## 1) Prompt (copy-paste) + +You are an SDK discovery/spec author. +Generate a complete `Discovery.md` by filling the structure from `Discovery.template.md`. + +### Inputs + +1. Source docs (Confluence/JIRA/meeting notes): + - + +2. Relevant code paths: + - + +3. Existing specs: + - + +4. Target package/module: + - + +### Hard Rules + +1. Do not invent existing APIs/events/symbols. + - If not found in code, mark as `Proposed`. +2. Every requirement must have an ID (`REQ-*`) and map to: + - at least one contract section (`API-*`, `EVT-*`, `PAY-*`, `ERR-*`) + - and at least one test case (`TEST-*`). +3. If data is missing: + - write `TBD` + - add an item in `Open Questions`. +4. Use exact enum/type/symbol names where known. +5. Keep `Current State` factual and concise. +6. Keep `Target Behavior` explicit and actionable. +7. Include backward compatibility impact for each API change. +8. Keep output pure Markdown using headings/tables from template. +9. Do not omit sections; use `None` where not applicable. +10. Use ASCII only. + +### Quality Bar + +- The document must be sufficient for implementation planning without re-reading the source docs. +- Contract tables must be complete and internally consistent. +- Test plan must reference contract IDs. + +Now generate the completed `Discovery.md`. 
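Hard Rule 2's traceability requirement can also be spot-checked mechanically before running the verification pass. The helper below is hypothetical (not part of any tooling in this repo) and uses the crude heuristic that a `REQ-*` ID mentioned only once was declared but never mapped:

```typescript
// Find REQ-* IDs that appear exactly once in the document, i.e. requirements
// declared in the requirements table but never referenced by an
// implementation-mapping or test row.

function findUntracedRequirements(doc: string): string[] {
  const reqIds = new Set(doc.match(/REQ-\d+/g) ?? []);
  return [...reqIds].filter((id) => {
    const mentions = doc.split(id).length - 1;
    return mentions < 2; // declared once, traced nowhere
  });
}

const sample = [
  '| REQ-001 | Stream transcripts | P0 |',
  '| REQ-002 | Dedupe chunks | P1 |',
  '| TEST-001 | ordering | REQ-001 |',
].join('\n');
```

Here `findUntracedRequirements(sample)` flags `REQ-002`, which appears in the requirements table but in no test row.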
+ +--- + +## 2) Optional Verification Prompt (recommended second pass) + +After generating `Discovery.md`, run this verification prompt: + +Validate this `Discovery.md` for internal consistency and implementation readiness. + +Checks: + +1. Every `REQ-*` appears in implementation mapping and tests. +2. Every `API-*` has compatibility impact specified. +3. Every emitted/listened event has payload type defined. +4. Every error condition has recoverability and caller action. +5. All `TBD` items appear in `Open Questions`. +6. No contradictory statements between As-Is and To-Be. +7. No fabricated symbols unless marked `Proposed`. + +Output: + +- PASS/FAIL +- list of issues with section references +- minimal edits required to reach PASS + +--- + +## 3) Team Usage Notes + +- Keep one `Discovery.md` per feature. +- Keep contract IDs stable across updates. +- Use changelog section for all revisions. +- In PR descriptions, reference implemented IDs (e.g., `API-002`, `EVT-004`, `TEST-007`). diff --git a/packages/@webex/contact-center/src/ai-docs/templates/Discovery.template.md b/packages/@webex/contact-center/src/ai-docs/templates/Discovery.template.md new file mode 100644 index 00000000000..277702ab414 --- /dev/null +++ b/packages/@webex/contact-center/src/ai-docs/templates/Discovery.template.md @@ -0,0 +1,246 @@ +# Discovery Spec: + +spec_version: 1.0 +feature_id: +status: Draft|Approved|In-Progress|Done +owner: +target_package: +last_updated_utc: +source_of_truth: + - + - + - +related_specs: + - + - + - + +--- + +## 1. Executive Summary + +### 1.1 Problem + + + +### 1.2 Outcome + + + +### 1.3 Scope + +- In scope: + - ... +- Out of scope: + - ... + +--- + +## 2. Inputs and Assumptions + +### 2.1 Inputs Used + +| Source | Link/Ref | Notes | +|---|---|---| +| Confluence | | | +| Prompt | | | +| Code references | | | + +### 2.2 Assumptions + +- A1: ... +- A2: ... + +### 2.3 Open Questions + +- Q1: ... +- Q2: ... + +--- + +## 3. 
Current State (As-Is) + +### 3.1 Relevant Modules/Files + +- `` +- `` + +### 3.2 Current Behavior + + + +### 3.3 Gaps + +- G1: ... +- G2: ... + +--- + +## 4. Target Behavior (To-Be) + +### 4.1 User/System Flow + +1. ... +2. ... +3. ... + +### 4.2 Alternate Flows + +- AF1: ... +- AF2: ... + +### 4.3 Failure/Retry Behavior + +- ... + +--- + +## 5. Contracts to Implement + +Use stable IDs for traceability: `REQ-*`, `API-*`, `EVT-*`, `PAY-*`, `ERR-*`, `TEST-*`. + +### 5.1 Requirements + +| Req ID | Requirement | Priority | Source | +|---|---|---|---| +| REQ-001 | ... | P0/P1/P2 | Confluence section X | + +### 5.2 Public API Contract + +| API ID | Module | Interface/Class | Method/Property | Signature | Add/Update/No-change | Compatibility | +|---|---|---|---|---|---|---| +| API-001 | ... | ... | ... | ... | Add | Backward compatible | + +### 5.3 Event Contract + +#### Events Listened + +| EVT ID | Consumer | Event Key (enum) | Payload Type | Purpose | +|---|---|---|---|---| + +#### Events Emitted + +| EVT ID | Emitter | Event Key (enum) | Payload Type | Emission Condition | +|---|---|---|---|---| + +#### Ordering/Delivery Rules + +- EVT-ORD-001: ... +- EVT-ORD-002: ... + +### 5.4 Payload Contract + +| PAY ID | Payload Name | Fields | Validation Rules | Notes | +|---|---|---|---|---| +| PAY-001 | ... | ... | ... | ... | + +Optional JSON schema block: + +```json +{ + "$id": "PAY-001", + "type": "object", + "required": ["..."], + "properties": {} +} +``` + +### 5.5 Error Contract + +| ERR ID | Condition | Error Type/Code | Recoverable | Emitted Event | Caller Action | +|---|---|---|---|---|---| + +--- + +## 6. State and Lifecycle + +### 6.1 State Model + +` -> -> ...` + +### 6.2 Transition Rules + +| From | Trigger | To | Side Effects | +|---|---|---|---| + +### 6.3 Concurrency/Idempotency + +- ... + +--- + +## 7. 
Observability Contract + +### 7.1 Logging + +| Log ID | Level | Message Pattern | Required Context Fields | +|---|---|---|---| + +### 7.2 Metrics + +| Metric ID | Name | Trigger | Dimensions | +|---|---|---|---| + +--- + +## 8. Security/Privacy/Compliance + +- Data classification: +- Sensitive fields: +- Redaction rules: +- Storage/retention implications: + +--- + +## 9. Implementation Mapping + +| Step | File(s) | Change Summary | Contract IDs Covered | +|---|---|---|---| +| 1 | `` | ... | API-001, EVT-002 | +| 2 | `` | ... | PAY-001, ERR-001 | + +--- + +## 10. Test Plan + +### 10.1 Unit Tests + +| TEST ID | Scenario | Expected Result | Contract IDs | +|---|---|---|---| + +### 10.2 Integration Tests + +| TEST ID | Scenario | Expected Result | Contract IDs | +|---|---|---|---| + +### 10.3 Negative/Chaos Tests + +| TEST ID | Scenario | Expected Result | Contract IDs | +|---|---|---|---| + +--- + +## 11. Rollout and Backward Compatibility + +- Feature flag: +- Gradual rollout steps: +- Fallback behavior: +- Migration notes for consumers: + +--- + +## 12. Acceptance Criteria (Definition of Done) + +- [ ] All `REQ-*` mapped to implementation +- [ ] All `API-*`/`EVT-*`/`PAY-*` finalized +- [ ] Tests for all `TEST-*` pass +- [ ] No unresolved P0/P1 open questions +- [ ] Docs updated in package-level specs + +--- + +## 13. Changelog + +| Date (UTC) | Author | Change | +|---|---|---| +| YYYY-MM-DD | | Initial draft | diff --git a/packages/@webex/contact-center/src/cc.ts b/packages/@webex/contact-center/src/cc.ts index c1996135301..7f47822bd03 100644 --- a/packages/@webex/contact-center/src/cc.ts +++ b/packages/@webex/contact-center/src/cc.ts @@ -740,7 +740,10 @@ export default class ContactCenter extends WebexPlugin implements IContactCenter * or other AI Assistant features will also use the same. * If the latter is true, we need to update this condition. 
*/ - if (this.agentConfig.aiFeature?.realtimeTranscripts?.enable) { + if ( + this.agentConfig.aiFeature?.realtimeTranscripts?.enable || + this.agentConfig.aiFeature?.suggestedResponses?.enable + ) { LoggerProxy.info('Connecting to RTD websocket', { module: CC_FILE, method: METHODS.CONNECT_WEBSOCKET, diff --git a/packages/@webex/contact-center/src/constants.ts b/packages/@webex/contact-center/src/constants.ts index 10227fef727..542ffbfcd05 100644 --- a/packages/@webex/contact-center/src/constants.ts +++ b/packages/@webex/contact-center/src/constants.ts @@ -63,5 +63,6 @@ export const METHODS = { GET_OUTDIAL_ANI_ENTRIES: 'getOutdialAniEntries', GET_BASE_URL: 'getBaseUrl', SEND_EVENT: 'sendEvent', + GET_SUGGESTED_RESPONSE: 'getSuggestedResponse', FETCH_HISTORIC_TRANSCRIPTS: 'fetchHistoricTranscripts', }; diff --git a/packages/@webex/contact-center/src/metrics/constants.ts b/packages/@webex/contact-center/src/metrics/constants.ts index a3e718cd5b0..ec2f04e4505 100644 --- a/packages/@webex/contact-center/src/metrics/constants.ts +++ b/packages/@webex/contact-center/src/metrics/constants.ts @@ -166,6 +166,8 @@ export const METRIC_EVENT_NAMES = { // AI Assistant events AI_ASSISTANT_SEND_EVENT_SUCCESS: 'AI Assistant Send Event Success', AI_ASSISTANT_SEND_EVENT_FAILED: 'AI Assistant Send Event Failed', + AI_ASSISTANT_GET_SUGGESTED_RESPONSE_SUCCESS: 'AI Assistant Get Suggested Response Success', + AI_ASSISTANT_GET_SUGGESTED_RESPONSE_FAILED: 'AI Assistant Get Suggested Response Failed', AI_ASSISTANT_FETCH_HISTORIC_TRANSCRIPTS_SUCCESS: 'AI Assistant Fetch Historic Transcripts Success', AI_ASSISTANT_FETCH_HISTORIC_TRANSCRIPTS_FAILED: 'AI Assistant Fetch Historic Transcripts Failed', diff --git a/packages/@webex/contact-center/src/services/ApiAiAssistant.ts b/packages/@webex/contact-center/src/services/ApiAiAssistant.ts index 0bc818082c7..a706360231f 100644 --- a/packages/@webex/contact-center/src/services/ApiAiAssistant.ts +++ 
b/packages/@webex/contact-center/src/services/ApiAiAssistant.ts @@ -1,3 +1,4 @@ +import {v4 as uuidv4} from 'uuid'; import LoggerProxy from '../logger-proxy'; import MetricsManager from '../metrics/MetricsManager'; import {METRIC_EVENT_NAMES} from '../metrics/constants'; @@ -10,6 +11,7 @@ import { AIAssistantEventType, AIAssistantEventName, HistoricTranscriptsResponse, + SuggestedResponseParams, } from '../types'; import {getErrorDetails} from './core/Utils'; import { @@ -70,6 +72,21 @@ export class ApiAIAssistant { return AI_ASSISTANT_BASE_URL_TEMPLATE.replace('%s', resolvedEnv); } + private static getLanguageCode(): string { + const navigatorLanguage = + typeof globalThis !== 'undefined' && + 'navigator' in globalThis && + globalThis.navigator?.language + ? globalThis.navigator.language + : ''; + + if (navigatorLanguage) { + return navigatorLanguage.split('-')[0] || navigatorLanguage; + } + + return 'en'; + } + /** * Sends an event to the AI Assistant service. * @param agentId - agent identifier @@ -83,13 +100,16 @@ export class ApiAIAssistant { interactionId: string, eventType: AIAssistantEventType, eventName: AIAssistantEventName, - action: TranscriptAction + action?: TranscriptAction, + context?: string, + languageCode?: string, + trackingId?: string ): Promise> { LoggerProxy.info('Sending event', { module: CC_FILE, method: METHODS.SEND_EVENT, interactionId, - data: {eventType, eventName, action}, + data: {eventType, eventName, action, context}, }); this.metricsManager.timeEvent([ METRIC_EVENT_NAMES.AI_ASSISTANT_SEND_EVENT_SUCCESS, @@ -112,7 +132,10 @@ export class ApiAIAssistant { data: { interactionId, action, + context, actionTimeStamp: String(Date.now()), + languageCode, + trackingId, }, }, }, @@ -143,6 +166,90 @@ export class ApiAIAssistant { } } + /** + * Requests a suggested response for an interaction. 
+ * + * @param params - Suggestion request parameters + * @returns HTTP response body from the AI Assistant event API + * @public + */ + public async getSuggestedResponse(params: SuggestedResponseParams): Promise { + const {agentId, interactionId, context} = params; + const trackingId = `WX_CC_SDK_${uuidv4()}`; + const eventName = context + ? AIAssistantEventName.ADD_SUGGESTIONS_EXTRA_CONTEXT + : AIAssistantEventName.GET_SUGGESTIONS; + + LoggerProxy.info('Requesting suggested response', { + module: CC_FILE, + method: METHODS.GET_SUGGESTED_RESPONSE, + interactionId, + }); + + this.metricsManager.timeEvent([ + METRIC_EVENT_NAMES.AI_ASSISTANT_GET_SUGGESTED_RESPONSE_SUCCESS, + METRIC_EVENT_NAMES.AI_ASSISTANT_GET_SUGGESTED_RESPONSE_FAILED, + ]); + + try { + if (!this.aiFeature?.suggestedResponses?.enable) { + const {error: detailedError} = getErrorDetails( + new Error('SUGGESTED_RESPONSES_NOT_ENABLED'), + METHODS.GET_SUGGESTED_RESPONSE, + CC_FILE + ); + throw detailedError; + } + + const orgId = this.webex.credentials.getOrgId(); + + const response = await this.sendEvent( + agentId, + interactionId, + AIAssistantEventType.CUSTOM_EVENT, + eventName, + undefined, + context?.trim(), + ApiAIAssistant.getLanguageCode(), + trackingId + ); + + this.metricsManager.trackEvent( + METRIC_EVENT_NAMES.AI_ASSISTANT_GET_SUGGESTED_RESPONSE_SUCCESS, + { + agentId, + orgId, + interactionId, + eventName, + trackingId, + context, + }, + ['operational'] + ); + + return response; + } catch (error) { + this.metricsManager.trackEvent( + METRIC_EVENT_NAMES.AI_ASSISTANT_GET_SUGGESTED_RESPONSE_FAILED, + { + agentId, + interactionId, + trackingId, + eventName, + error: error instanceof Error ? error.message : String(error), + }, + ['operational'] + ); + + const {error: detailedError} = getErrorDetails( + error, + METHODS.GET_SUGGESTED_RESPONSE, + CC_FILE + ); + throw detailedError; + } + } + /** * Fetches historic transcripts for an interaction. 
* This API is allowed only when real-time transcription feature is enabled. diff --git a/packages/@webex/contact-center/src/services/config/types.ts b/packages/@webex/contact-center/src/services/config/types.ts index 827c1561177..7e896f905a4 100644 --- a/packages/@webex/contact-center/src/services/config/types.ts +++ b/packages/@webex/contact-center/src/services/config/types.ts @@ -121,6 +121,14 @@ export const CC_TASK_EVENTS = { AGENT_INVITE_FAILED: 'AgentInviteFailed', /** Event emitted when a real-time transcript chunk is received */ REAL_TIME_TRANSCRIPTION: 'REAL_TIME_TRANSCRIPTION', + /** Event emitted when an AI assistant suggested response is available */ + SUGGESTED_RESPONSE: 'SUGGESTED_RESPONSE', + /** Event emitted when backend acknowledges it is listening for more context */ + SUGGESTED_RESPONSE_ACKNOWLEDGE: 'SUGGESTED_RESPONSE_ACKNOWLEDGE', + /** Event emitted when a mid-call summary is available */ + MID_CALL_SUMMARY: 'MID_CALL_SUMMARY', + /** Event emitted when a post-call summary is available */ + POST_CALL_SUMMARY: 'POST_CALL_SUMMARY', } as const; /** diff --git a/packages/@webex/contact-center/src/services/task/TaskManager.ts b/packages/@webex/contact-center/src/services/task/TaskManager.ts index d3aff870343..3d6917c5b00 100644 --- a/packages/@webex/contact-center/src/services/task/TaskManager.ts +++ b/packages/@webex/contact-center/src/services/task/TaskManager.ts @@ -93,7 +93,18 @@ export default class TaskManager extends EventEmitter { return; } - task.emit(payload.type, payload.data); + switch (payload.type) { + case CC_EVENTS.REAL_TIME_TRANSCRIPTION: + case CC_EVENTS.SUGGESTED_RESPONSE: + task.emit(payload.type, payload.data); + break; + case CC_EVENTS.SUGGESTED_RESPONSE_ACKNOWLEDGE: + // TODO: Handling this event + break; + case CC_EVENTS.POST_CALL_SUMMARY: + case CC_EVENTS.MID_CALL_SUMMARY: + break; + } } catch (error) { LoggerProxy.error('Failed to parse RTD WebSocket message', { module: TASK_MANAGER_FILE, diff --git 
a/packages/@webex/contact-center/src/services/task/ai-docs/AGENTS.md b/packages/@webex/contact-center/src/services/task/ai-docs/AGENTS.md index cc64cb6ceee..4cf82c61930 100644 --- a/packages/@webex/contact-center/src/services/task/ai-docs/AGENTS.md +++ b/packages/@webex/contact-center/src/services/task/ai-docs/AGENTS.md @@ -217,6 +217,13 @@ cc.on('task:incoming', async (task) => { > Full list is defined in `TASK_EVENTS` (`types.ts`). +### AI Assistant events on `task` + +| Event | When Emitted | +| --- | --- | +| `REAL_TIME_TRANSCRIPTION` | A realtime transcript payload is received for the task interaction | +| `SUGGESTED_RESPONSE` | A final AI Assistant suggestion payload is received for the task interaction | + --- ## API Reference diff --git a/packages/@webex/contact-center/src/services/task/ai-docs/ARCHITECTURE.md b/packages/@webex/contact-center/src/services/task/ai-docs/ARCHITECTURE.md index a00dc9d4687..7292a4c7bad 100644 --- a/packages/@webex/contact-center/src/services/task/ai-docs/ARCHITECTURE.md +++ b/packages/@webex/contact-center/src/services/task/ai-docs/ARCHITECTURE.md @@ -400,6 +400,18 @@ this.webSocketManager.on('message', (event) => { }); ``` +### RTD / AI Assistant event routing + +`TaskManager.handleRealtimeWebsocketEvent()` handles payloads arriving on the realtime subscription socket used for AI features. It: + +1. Normalizes the websocket envelope (`payload.data` vs direct payload form) +2. Resolves the owning task via `conversationId` +3. Emits `REAL_TIME_TRANSCRIPTION` on the task for transcript payloads +4. Emits `SUGGESTED_RESPONSE` on the task only when the backend payload is a final suggestion (`data.type === 'SUGGESTION'`) +5. Ignores `SUGGESTED_RESPONSE_ACKNOWLEDGE` for public SDK emission + +This keeps transcript and suggestion delivery aligned on the same per-task event surface. 
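The routing decision in steps 3-5 can be sketched as a pure predicate. This mirrors the description above rather than the literal `TaskManager` source:

```typescript
// Which RTD payload types reach the public per-task event surface.
// Acknowledgements and unknown types are swallowed internally.

type RtdPayload = {type: string; data?: {type?: string}};

function shouldEmitToTask(payload: RtdPayload): boolean {
  switch (payload.type) {
    case 'REAL_TIME_TRANSCRIPTION':
      return true; // every transcript chunk is surfaced
    case 'SUGGESTED_RESPONSE':
      // only final suggestions are surfaced to the task
      return payload.data?.type === 'SUGGESTION';
    case 'SUGGESTED_RESPONSE_ACKNOWLEDGE':
    default:
      return false;
  }
}
```

Keeping this decision in one predicate makes it straightforward to unit-test the emit/suppress matrix independently of websocket plumbing.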
+
 ---
 
 ## WebRTC Integration
diff --git a/packages/@webex/contact-center/src/types.ts b/packages/@webex/contact-center/src/types.ts
index 4aadb298f02..fe7441311c9 100644
--- a/packages/@webex/contact-center/src/types.ts
+++ b/packages/@webex/contact-center/src/types.ts
@@ -846,6 +846,25 @@ export type UpdateDeviceTypeResponse = Agent.DeviceTypeUpdateSuccess | Error;
  */
 export type TranscriptAction = 'START' | 'STOP';
 
+/**
+ * Parameters used to request an AI Assistant suggested response.
+ * @public
+ * @example
+ * const params: SuggestedResponseParams = {
+ *   agentId: 'agent-123',
+ *   interactionId: 'interaction-123',
+ *   context: 'Need help with credit card payment due date',
+ * };
+ */
+export type SuggestedResponseParams = {
+  /** Agent identifier */
+  agentId: string;
+  /** Interaction identifier for which suggestion should be generated */
+  interactionId: string;
+  /** Optional additional context that should refine the suggestion */
+  context?: string;
+};
+
 /**
  * Supported AI Assistant event categories. 
* @public @@ -879,6 +898,10 @@ export type AIAssistantEventType = Enum; export const AIAssistantEventName = { /** Request transcript streaming for an interaction */ GET_TRANSCRIPTS: 'GET_TRANSCRIPTS', + /** Request a suggested response for an interaction */ + GET_SUGGESTIONS: 'GET_SUGGESTIONS', + /** Add extra context to refine a suggested response */ + ADD_SUGGESTIONS_EXTRA_CONTEXT: 'ADD_SUGGESTIONS_EXTRA_CONTEXT', /** Request mid-call summary generation */ GET_MID_CALL_SUMMARY: 'GET_MID_CALL_SUMMARY', /** Request post-call summary generation */ diff --git a/packages/@webex/contact-center/test/unit/spec/cc.ts b/packages/@webex/contact-center/test/unit/spec/cc.ts index 8d321aca5ee..fa6b7c8fc58 100644 --- a/packages/@webex/contact-center/test/unit/spec/cc.ts +++ b/packages/@webex/contact-center/test/unit/spec/cc.ts @@ -110,8 +110,10 @@ describe('webex.cc', () => { mockApiAIAssistant = { sendEvent: jest.fn(), + getSuggestedResponse: jest.fn(), fetchHistoricTranscripts: jest.fn(), setAIFeatureFlags: jest.fn(), + setAgentId: jest.fn(), }; // Mock Services instance diff --git a/packages/@webex/contact-center/test/unit/spec/services/ApiAiAssistant.ts b/packages/@webex/contact-center/test/unit/spec/services/ApiAiAssistant.ts index 251153c2046..a7b60566103 100644 --- a/packages/@webex/contact-center/test/unit/spec/services/ApiAiAssistant.ts +++ b/packages/@webex/contact-center/test/unit/spec/services/ApiAiAssistant.ts @@ -57,23 +57,18 @@ describe('ApiAIAssistant', () => { 'START' ); - expect(mockWebex.request).toHaveBeenCalledWith({ - uri: 'https://api-ai-assistant.produs1.ciscoccservice.com/event', - method: HTTP_METHODS.POST, - addAuthHeader: true, - body: { - agentId: 'test-agent-id', - orgId: 'test-org-id', - eventType: 'CUSTOM_EVENT', - eventName: 'GET_TRANSCRIPTS', - eventDetails: { - data: expect.objectContaining({ - interactionId: 'interaction-1', - action: 'START', - }), - }, - }, - }); + expect(mockWebex.request).toHaveBeenCalledTimes(1); + const requestArgs = 
(mockWebex.request as jest.Mock).mock.calls[0][0]; + + expect(requestArgs.uri).toBe('https://api-ai-assistant.produs1.ciscoccservice.com/event'); + expect(requestArgs.method).toBe(HTTP_METHODS.POST); + expect(requestArgs.addAuthHeader).toBe(true); + expect(requestArgs.body.agentId).toBe('test-agent-id'); + expect(requestArgs.body.orgId).toBe('test-org-id'); + expect(requestArgs.body.eventType).toBe('CUSTOM_EVENT'); + expect(requestArgs.body.eventName).toBe('GET_TRANSCRIPTS'); + expect(requestArgs.body.eventDetails.data.interactionId).toBe('interaction-1'); + expect(requestArgs.body.eventDetails.data.action).toBe('START'); expect(result).toEqual({ok: true}); }); @@ -97,6 +92,57 @@ describe('ApiAIAssistant', () => { expect(result).toEqual(responseBody as any); }); + it('should request suggested response without extra context using sendEvent', async () => { + const sendEventSpy = jest.spyOn(apiAIAssistant, 'sendEvent').mockResolvedValue({ok: true}); + apiAIAssistant.setAIFeatureFlags({suggestedResponses: {enable: true}} as any); + + const result = await apiAIAssistant.getSuggestedResponse({ + agentId: 'test-agent-id', + interactionId: 'interaction-1', + }); + + expect(sendEventSpy).toHaveBeenCalledTimes(1); + const [agentId, interactionId, eventType, eventName, action, context, languageCode, trackingId] = + sendEventSpy.mock.calls[0]; + + expect(agentId).toBe('test-agent-id'); + expect(interactionId).toBe('interaction-1'); + expect(eventType).toBe('CUSTOM_EVENT'); + expect(eventName).toBe('GET_SUGGESTIONS'); + expect(action).toBeUndefined(); + expect(context).toBeUndefined(); + expect(languageCode).toBe('en'); + expect(typeof trackingId).toBe('string'); + expect(trackingId.startsWith('WX_CC_SDK_')).toBe(true); + expect(result).toEqual({ok: true}); + }); + + it('should request suggested response with extra context using sendEvent', async () => { + const sendEventSpy = jest.spyOn(apiAIAssistant, 'sendEvent').mockResolvedValue({ok: true}); + 
apiAIAssistant.setAIFeatureFlags({suggestedResponses: {enable: true}} as any); + + const result = await apiAIAssistant.getSuggestedResponse({ + agentId: 'test-agent-id', + interactionId: 'interaction-1', + context: 'Need assistance with credit card payment due date', + }); + + expect(sendEventSpy).toHaveBeenCalledTimes(1); + const [agentId, interactionId, eventType, eventName, action, context, languageCode, trackingId] = + sendEventSpy.mock.calls[0]; + + expect(agentId).toBe('test-agent-id'); + expect(interactionId).toBe('interaction-1'); + expect(eventType).toBe('CUSTOM_EVENT'); + expect(eventName).toBe('ADD_SUGGESTIONS_EXTRA_CONTEXT'); + expect(action).toBeUndefined(); + expect(context).toBe('Need assistance with credit card payment due date'); + expect(languageCode).toBe('en'); + expect(typeof trackingId).toBe('string'); + expect(trackingId.startsWith('WX_CC_SDK_')).toBe(true); + expect(result).toEqual({ok: true}); + }); + it('should fail when base URL mapping is not available', async () => { (mockWebex.internal.services.get as jest.Mock).mockReturnValue('https://unknown-host.invalid'); @@ -129,4 +175,20 @@ describe('ApiAIAssistant', () => { expect(errorMessage).toBe('Error while performing fetchHistoricTranscripts'); }); + + it('should fail when suggested responses feature is disabled', async () => { + apiAIAssistant.setAIFeatureFlags({suggestedResponses: {enable: false}} as any); + let errorMessage = ''; + + try { + await apiAIAssistant.getSuggestedResponse({ + agentId: 'test-agent-id', + interactionId: 'interaction-1', + }); + } catch (error) { + errorMessage = (error as Error)?.message || ''; + } + + expect(errorMessage).toBe('Error while performing getSuggestedResponse'); + }); }); From 48f08b217c34ac7c158779f02e1694d7058afda5 Mon Sep 17 00:00:00 2001 From: Priya Date: Thu, 7 May 2026 22:16:27 +0530 Subject: [PATCH 2/2] feat(contact-center): sample app changes --- docs/samples/contact-center/app.js | 320 +++++++++++++++++++++++++ 
docs/samples/contact-center/index.html | 29 +++ docs/samples/contact-center/style.css | 195 ++++++++++++++- 3 files changed, 540 insertions(+), 4 deletions(-) diff --git a/docs/samples/contact-center/app.js b/docs/samples/contact-center/app.js index 70a0c4fec8f..7b7dba9a076 100644 --- a/docs/samples/contact-center/app.js +++ b/docs/samples/contact-center/app.js @@ -97,6 +97,10 @@ const liveTranscriptTabElm = document.querySelector('#transcript-tab-live'); const ivrTranscriptTabElm = document.querySelector('#transcript-tab-ivr'); const liveTranscriptPaneElm = document.querySelector('#transcript-live-pane'); const ivrTranscriptPaneElm = document.querySelector('#transcript-ivr-pane'); +const aiAssistantContentElm = document.querySelector('#ai-assistant-content'); +const aiAssistantContextInputElm = document.querySelector('#assistant-context-input'); +const aiAssistantActionBtn = document.querySelector('#get-assistance'); +const aiAssistantContextBtn = document.querySelector('#send-assistant-context'); const multiLoginCheckbox = document.querySelector('#multiLoginFlag'); deregisterBtn.style.backgroundColor = 'red'; @@ -112,6 +116,9 @@ function toggleMultiLogin() { const transcriptEntries = []; const MAX_TRANSCRIPT_LINES = 200; +const MAX_AI_ASSISTANT_ENTRIES = 50; +const registeredTaskListeners = new WeakSet(); +const aiAssistantStateByInteraction = new Map(); function formatTranscriptTime(epochMillis) { if (!epochMillis || typeof epochMillis !== 'number') { @@ -204,6 +211,258 @@ function appendRealtimeTranscript(payload) { setTranscriptTab('live'); } +function getAiAssistantState(interactionId) { + if (!aiAssistantStateByInteraction.has(interactionId)) { + aiAssistantStateByInteraction.set(interactionId, { + listening: false, + entries: [], + error: '', + }); + } + + return aiAssistantStateByInteraction.get(interactionId); +} + +function formatAssistantTime(epochMillis) { + if (!epochMillis || typeof epochMillis !== 'number') { + return '--:--'; + } + + return new 
Date(epochMillis).toLocaleTimeString([], {minute: '2-digit', second: '2-digit'}); +} + +function trimAiAssistantEntries(state) { + if (state.entries.length > MAX_AI_ASSISTANT_ENTRIES) { + state.entries.splice(0, state.entries.length - MAX_AI_ASSISTANT_ENTRIES); + } +} + +function getAdaptiveCardTextLines(node, lines = []) { + if (!node) { + return lines; + } + + if (node.type === 'TextBlock' && typeof node.text === 'string' && node.text.trim()) { + lines.push(node.text.trim()); + } + + if (node.type === 'RichTextBlock' && Array.isArray(node.inlines)) { + const inlineText = node.inlines + .map((inline) => String(inline?.text || '').trim()) + .join(' ') + .trim(); + + if (inlineText) { + lines.push(inlineText); + } + } + + ['body', 'items', 'columns'].forEach((key) => { + if (!Array.isArray(node[key])) { + return; + } + + node[key].forEach((child) => getAdaptiveCardTextLines(child, lines)); + }); + + return lines; +} + +function getCustomerQueryText(suggestionNode) { + if (typeof suggestionNode?.customerQuery === 'string' && suggestionNode.customerQuery.trim()) { + return suggestionNode.customerQuery.trim(); + } + + if (typeof suggestionNode?.query === 'string' && suggestionNode.query.trim()) { + return suggestionNode.query.trim(); + } + + const lines = getAdaptiveCardTextLines(suggestionNode?.adaptiveCard).filter((line) => { + const normalizedLine = line.toLowerCase(); + + return normalizedLine !== 'the customer said:' && normalizedLine !== 'customer said:'; + }); + + return lines[lines.length - 1] || ''; +} + +function normalizeSuggestedResponse(payload) { + const suggestionNode = payload?.data || {}; + const rawType = typeof suggestionNode?.type === 'string' ? 
suggestionNode.type.toUpperCase() : ''; + + if (!rawType) { + return null; + } + + const id = suggestionNode?.adaptiveCardId || suggestionNode?.trackingId || `${rawType}-${Date.now()}`; + const timestamp = suggestionNode?.suggestionInputTimestamp || suggestionNode?.publishTimestamp || Date.now(); + + if (rawType === 'CUSTOMER_QUERY') { + const text = getCustomerQueryText(suggestionNode); + + if (!text) { + return null; + } + + return { + id, + type: 'customer-query', + label: 'The customer said:', + text, + timestamp, + }; + } + + if (rawType !== 'SUGGESTION') { + return null; + } + + return { + id, + type: 'suggestion', + title: suggestionNode?.title || 'Suggested response', + suggestion: suggestionNode?.suggestion || '', + source: suggestionNode?.suggestionSource || '', + timestamp, + }; +} + +function renderAiAssistantPanel() { + if (!aiAssistantContentElm || !aiAssistantActionBtn || !aiAssistantContextInputElm) { + return; + } + + aiAssistantContentElm.innerHTML = ''; + + const interactionId = currentTask?.data?.interactionId; + const hasSelectedTask = Boolean(interactionId); + aiAssistantActionBtn.disabled = !hasSelectedTask; + aiAssistantContextInputElm.disabled = !hasSelectedTask; + if (aiAssistantContextBtn) { + aiAssistantContextBtn.disabled = !hasSelectedTask; + } + + if (!hasSelectedTask) { + const emptyStateElm = document.createElement('div'); + emptyStateElm.className = 'assistant-empty-state'; + emptyStateElm.textContent = 'Select an active task to request AI assistance.'; + aiAssistantContentElm.appendChild(emptyStateElm); + return; + } + + const state = getAiAssistantState(interactionId); + const introElm = document.createElement('div'); + introElm.className = 'assistant-intro'; + introElm.innerHTML = ` + <span class="assistant-logo">AI</span>
+ <div class="assistant-intro__title">I'm here to help! I'll keep listening and suggest responses as the conversation evolves.</div>
+ `; + aiAssistantContentElm.appendChild(introElm); + + if (!state.entries.length && !state.listening && !state.error) { + const welcomeElm = document.createElement('div'); + welcomeElm.className = 'assistant-empty-state'; + welcomeElm.textContent = 'Click "Get assistance" to start generating suggested responses.'; + aiAssistantContentElm.appendChild(welcomeElm); + } + + state.entries.forEach((entry) => { + if (entry.type === 'request' && entry.text) { + const requestElm = document.createElement('div'); + requestElm.className = 'assistant-request'; + requestElm.textContent = entry.text; + aiAssistantContentElm.appendChild(requestElm); + return; + } + + if (entry.type === 'customer-query' && entry.text) { + const customerQueryElm = document.createElement('div'); + customerQueryElm.className = 'assistant-customer-query'; + customerQueryElm.innerHTML = ` + <div class="assistant-customer-query__label">${entry.label}</div>
+ <div class="assistant-customer-query__body"></div>
+ `; + customerQueryElm.querySelector('.assistant-customer-query__body').textContent = entry.text; + aiAssistantContentElm.appendChild(customerQueryElm); + return; + } + + if (entry.type === 'suggestion') { + const cardElm = document.createElement('div'); + cardElm.className = 'assistant-suggestion-card'; + cardElm.innerHTML = ` + <div class="assistant-suggestion-card__title"></div>
+ <div class="assistant-suggestion-card__body"></div>
+ <div class="assistant-suggestion-card__meta"> + <span>Source</span> + <span>${formatAssistantTime(entry.timestamp)} ${entry.source ? `• ${entry.source}` : '•'}</span> + </div>
+ `; + cardElm.querySelector('.assistant-suggestion-card__title').textContent = entry.title; + cardElm.querySelector('.assistant-suggestion-card__body').textContent = entry.suggestion; + aiAssistantContentElm.appendChild(cardElm); + } + }); + + if (state.listening) { + const listeningElm = document.createElement('div'); + listeningElm.className = 'assistant-listening'; + listeningElm.innerHTML = ` + <span class="assistant-listening__dots"> + <span></span><span></span><span></span> + </span> + Listening for information + `; + aiAssistantContentElm.appendChild(listeningElm); + } + + if (state.error) { + const errorElm = document.createElement('div'); + errorElm.className = 'assistant-error'; + errorElm.textContent = state.error; + aiAssistantContentElm.appendChild(errorElm); + } +} + +async function requestSuggestedResponse() { + if (!currentTask || !webex?.cc?.apiAIAssistant) { + return; + } + + const interactionId = currentTask.data.interactionId; + const state = getAiAssistantState(interactionId); + const context = aiAssistantContextInputElm?.value?.trim(); + const actionTimeStamp = Date.now(); + + state.error = ''; + state.listening = true; + + if (context) { + state.entries.push({ + type: 'request', + text: context, + timestamp: actionTimeStamp, + }); + } + + trimAiAssistantEntries(state); + renderAiAssistantPanel(); + + try { + await webex.cc.apiAIAssistant.getSuggestedResponse({ + agentId, + interactionId, + actionTimeStamp, + ...(context ? 
{context} : {}), + }); + aiAssistantContextInputElm.value = ''; + } catch (error) { + state.listening = false; + state.error = error?.message || 'Unable to get AI assistance.'; + renderAiAssistantPanel(); + } +} + if (liveTranscriptTabElm) { liveTranscriptTabElm.addEventListener('click', () => setTranscriptTab('live')); } @@ -218,6 +477,27 @@ if (clearTranscriptsButton) { }); } +if (aiAssistantActionBtn) { + aiAssistantActionBtn.addEventListener('click', requestSuggestedResponse); +} + +if (aiAssistantContextBtn) { + aiAssistantContextBtn.addEventListener('click', requestSuggestedResponse); +} + +if (aiAssistantContextInputElm) { + aiAssistantContextInputElm.addEventListener('keydown', (event) => { + if (event.key !== 'Enter') { + return; + } + + event.preventDefault(); + requestSuggestedResponse(); + }); +} + +renderAiAssistantPanel(); + function isIncomingTask(task, agentId) { const taskData = task?.data; const taskState = taskData?.interaction?.state; @@ -1299,11 +1579,46 @@ function isTaskLegOnHold(task, leg = 'main') { // Register task listeners function registerTaskListeners(task) { + if (!task || registeredTaskListeners.has(task)) { + return; + } + + registeredTaskListeners.add(task); + task.on('REAL_TIME_TRANSCRIPTION', (payload) => { console.info('Received real-time transcription:', payload); appendRealtimeTranscript(payload); }); + task.on('SUGGESTED_RESPONSE', (payload) => { + console.info('Received suggested response:', payload); + + const entry = normalizeSuggestedResponse(payload); + if (!entry) { + return; + } + + const interactionId = task?.data?.interactionId; + const state = getAiAssistantState(interactionId); + const existingIndex = state.entries.findIndex( + (stateEntry) => stateEntry.type === entry.type && stateEntry.id === entry.id + ); + + state.error = ''; + + if (existingIndex >= 0) { + state.entries.splice(existingIndex, 1, entry); + } else { + state.entries.push(entry); + } + + trimAiAssistantEntries(state); + + if 
(currentTask?.data?.interactionId === interactionId) { + renderAiAssistantPanel(); + } + }); + task.on('task:assigned', (task) => { updateTaskList(); // Update the task list UI to have latest tasks console.info('Call has been accepted for task: ', task.data.interactionId); @@ -1475,6 +1790,9 @@ function registerTaskListeners(task) { // Clean up task creation time tracking taskCreationTimes.delete(task.data.interactionId); + const aiAssistantState = getAiAssistantState(task.data.interactionId); + aiAssistantState.listening = false; + aiAssistantState.error = ''; // If this is the current task, clear all controls if (currentTask && currentTask.data.interactionId === task.data.interactionId) { @@ -1485,6 +1803,7 @@ function registerTaskListeners(task) { // Clear currentTask since task has ended currentTask = undefined; + renderAiAssistantPanel(); } updateTaskList(); }); @@ -3023,6 +3342,7 @@ function handleTaskSelect(task) { engageElm.style.height = "100px" const chatAndSocial = ['chat', 'social']; currentTask = task + renderAiAssistantPanel(); if (chatAndSocial.includes(task.data.interaction.mediaType) && isBundleLoaded && !task.data.wrapUpRequired) { loadChatWidget(task); } else if (task.data.interaction.mediaType === 'email' && isBundleLoaded && !task.data.wrapUpRequired) { diff --git a/docs/samples/contact-center/index.html b/docs/samples/contact-center/index.html index 75f5c81dc39..2c5062c9ba4 100644 --- a/docs/samples/contact-center/index.html +++ b/docs/samples/contact-center/index.html @@ -342,6 +342,35 @@

+
+ <fieldset class="assistant-card">
+ <legend>Cisco AI Assistant</legend>
+ <div id="ai-assistant-content" class="assistant-content"></div>
+ <div class="assistant-actions">
+ <button id="get-assistance" class="btn assistant-action-btn">Get assistance</button>
+ </div>
+ <div class="assistant-context-row">
+ <input id="assistant-context-input" class="assistant-context-input" type="text" placeholder="Add context to refine the suggestion" />
+ <button id="send-assistant-context" class="btn assistant-context-btn">Send</button>
+ </div>
+ <div class="assistant-footnote">I can make mistakes, so check my responses.</div>
+ </fieldset>
diff --git a/docs/samples/contact-center/style.css b/docs/samples/contact-center/style.css index 80359055f35..8569f1e4990 100644 --- a/docs/samples/contact-center/style.css +++ b/docs/samples/contact-center/style.css @@ -211,10 +211,6 @@ button.btn-code { justify-content: space-between; } -.multistream-buttons select { - /* margin: 0 0.5rem; */ -} - .stage { min-height: 30rem; } @@ -658,6 +654,197 @@ legend { padding: 4px 0; } +.assistant-card { + background: #fff; + border: 1px solid #e6e6e6; + border-radius: 8px; + padding: 16px; +} + +.assistant-content { + display: flex; + flex-direction: column; + gap: 14px; + min-height: 220px; +} + +.assistant-intro { + align-items: center; + display: flex; + gap: 12px; +} + +.assistant-logo { + align-items: center; + background: linear-gradient(135deg, #00a1ff, #27c46b); + border-radius: 50%; + color: #fff; + display: flex; + flex: 0 0 36px; + font-size: 13px; + font-weight: 700; + height: 36px; + justify-content: center; + width: 36px; +} + +.assistant-intro__title, +.assistant-customer-query__label { + color: #111827; + font-size: 16px; + font-weight: 700; + line-height: 1.35; +} + +.assistant-empty-state { + color: #6b7280; + font-size: 14px; + line-height: 1.4; +} + +.assistant-request { + align-self: flex-end; + background: #111; + border-radius: 12px; + color: #fff; + max-width: 85%; + padding: 10px 14px; +} + +.assistant-customer-query { + display: flex; + flex-direction: column; + gap: 8px; +} + +.assistant-customer-query__body { + background: #fff; + border: 1px solid #8ec5ff; + border-radius: 12px; + color: #111827; + line-height: 1.5; + padding: 12px 14px; + white-space: pre-wrap; +} + +.assistant-suggestion-card { + background: #f3f4f6; + border-radius: 12px; + padding: 14px; +} + +.assistant-suggestion-card__title { + color: #111827; + font-size: 15px; + font-weight: 700; + line-height: 1.35; + margin-bottom: 10px; +} + +.assistant-suggestion-card__body { + color: #1f2937; + line-height: 1.5; + white-space: 
pre-wrap; +} + +.assistant-suggestion-card__meta { + align-items: center; + border-top: 1px solid #d1d5db; + color: #4b5563; + display: flex; + font-size: 12px; + justify-content: space-between; + margin-top: 12px; + padding-top: 10px; +} + +.assistant-listening { + align-items: center; + color: #111827; + display: flex; + font-size: 15px; + font-weight: 600; + gap: 10px; +} + +.assistant-listening__dots { + display: inline-flex; + gap: 4px; +} + +.assistant-listening__dots span { + animation: assistant-pulse 1.2s infinite ease-in-out; + background: #2fb3ff; + border-radius: 50%; + display: inline-block; + height: 8px; + width: 8px; +} + +.assistant-listening__dots span:nth-child(2) { + animation-delay: 0.2s; +} + +.assistant-actions { + display: flex; + justify-content: flex-end; + margin-top: 12px; +} + +.assistant-action-btn { + border-radius: 999px; + min-width: 180px; +} + +.assistant-context-input { + border: 1px solid #d1d5db; + border-radius: 8px; + box-sizing: border-box; + margin-top: 12px; + padding: 12px 14px; + width: 100%; +} + +.assistant-context-row { + align-items: center; + display: flex; + gap: 10px; + margin-top: 12px; +} + +.assistant-context-row .assistant-context-input { + margin-top: 0; +} + +.assistant-context-btn { + flex: 0 0 auto; + min-width: 120px; +} + +.assistant-footnote { + color: #6b7280; + font-size: 12px; + margin-top: 12px; + text-align: center; +} + +.assistant-error { + color: #b42318; + font-size: 13px; +} + +@keyframes assistant-pulse { + 0%, 80%, 100% { + opacity: 0.35; + transform: scale(0.9); + } + + 40% { + opacity: 1; + transform: scale(1); + } +} + .task-controls-cards { display: flex; flex-wrap: wrap;