
refactor(openai-base): rename, adopt openai SDK, decouple ai-openrouter#545

Merged
tombeckenham merged 49 commits into main from 543-migrate-ai-groq-ai-openrouter-ai-ollama-to-openai-base-+-parameterize-the-base-for-sdk-shape-variance on May 14, 2026

Conversation


tombeckenham (Contributor) commented May 11, 2026

🎯 Changes

This PR consolidates the OpenAI-compatible provider stack and rewires ai-groq and ai-openrouter. The work landed in three arcs over many commits — the shape changed mid-flight, so the headline below describes the final state rather than the history.

Final state

@tanstack/openai-base (renamed from the experimental @tanstack/ai-openai-compatible / @tanstack/openai-compatible names that were tried mid-PR) is now a thin shim over the openai SDK:

  • Owns the AG-UI lifecycle pipeline for both Chat Completions and Responses APIs (~1k LOC of stream accumulation, partial-JSON tool-call buffering, RUN_ERROR taxonomy, structured-output coercion).
  • Imports wire-format types from openai/resources/* directly — no vendored types.
  • Constructor takes a pre-built OpenAI client; the base calls client.chat.completions.create / client.responses.create itself. No more abstract callChatCompletion* / callResponse* hooks.
  • Subclasses (ai-openai, ai-grok, ai-groq) just construct the SDK with their provider-specific baseURL + headers and pass it to super.
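The constructor pattern above can be sketched roughly as follows. This is a hypothetical illustration, not the package's real API: the class names, the `OpenAIClientLike` interface, and the header wiring are assumptions.

```typescript
// Stand-in for the openai SDK client surface the base needs (assumption).
interface OpenAIClientLike {
  baseURL: string;
  defaultHeaders?: Record<string, string>;
}

// The base owns the AG-UI pipeline and calls the client itself.
class OpenAIBaseTextAdapter {
  constructor(protected client: OpenAIClientLike) {}
}

// A provider subclass only builds the SDK client with its own baseURL and
// headers, then hands it to super — no abstract call* hooks to implement.
class GroqLikeTextAdapter extends OpenAIBaseTextAdapter {
  constructor(apiKey: string) {
    super({
      baseURL: 'https://api.groq.com/openai/v1',
      defaultHeaders: { Authorization: `Bearer ${apiKey}` },
    });
  }
}
```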

ai-groq is migrated off the groq-sdk package. It now uses the OpenAI SDK pointed at https://api.groq.com/openai/v1 (same pattern as ai-grok against xAI). Groq-specific quirks preserved via overridable hooks:

  • processStreamChunks promotes chunk.x_groq.usage → chunk.usage so the base's RUN_FINISHED accounting works.
  • makeStructuredOutputCompatible applies Groq's strict-mode schema quirks.
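The usage-promotion quirk can be sketched like this. Chunk shapes here are assumed for illustration; only the fields named in the bullet above are modeled.

```typescript
// Groq reports token usage under x_groq on the final stream chunk, while
// the shared base reads chunk.usage for RUN_FINISHED accounting — so the
// override copies it across when the standard field is absent.
interface GroqStreamChunk {
  usage?: { total_tokens: number } | null;
  x_groq?: { usage?: { total_tokens: number } };
}

function promoteGroqUsage(chunk: GroqStreamChunk): GroqStreamChunk {
  if (!chunk.usage && chunk.x_groq?.usage) {
    return { ...chunk, usage: chunk.x_groq.usage };
  }
  return chunk;
}
```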

ai-openrouter is fully decoupled from openai-base. The earlier pass migrated it onto the base via SDK-call hooks + a snake_case ↔ camelCase round-trip; the final pass undoes that because OpenRouter's SDK shape was always native camelCase and the round-trip was pure friction:

  • Two standalone classes (OpenRouterTextAdapter, OpenRouterResponsesTextAdapter) extend BaseTextAdapter directly.
  • Stream processors duplicated locally and rewritten to read OpenRouter's camelCase types natively.
  • ~300 LOC of toOpenRouterRequest / toChatCompletion / adaptOpenRouterStreamChunks / toSnakeResponseResult / etc. deleted.
  • No @tanstack/openai-base, no openai SDK dep — only @openrouter/sdk, @tanstack/ai, @tanstack/ai-utils.

New: OpenRouter Responses (beta) adapter (openRouterResponsesText, createOpenRouterResponsesText). OpenRouter's /v1/responses endpoint fans out to Anthropic Claude, Google Gemini, etc. — the adapter exposes that surface alongside the existing chat-completions adapter.

Other improvements:

  • Summarize adapters across providers unified on the chat-stream wrapper (chat-stream-summarize).
  • @tanstack/ai normalizes abort-shaped errors (AbortError, APIUserAbortError, RequestAbortedError) to a stable { message: 'Request aborted', code: 'aborted' } payload in toRunErrorPayload so consumers can discriminate user-cancellation from other failures.
  • StreamChunk emissions across adapters now use satisfies StreamChunk instead of asChunk casts — drift surfaces at compile time.
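The abort normalization can be sketched as below. The error-class names come from the PR text; matching them by `Error.name` is an assumption about the implementation.

```typescript
// Abort-shaped error names named in the PR description.
const ABORT_ERROR_NAMES = new Set([
  'AbortError',
  'APIUserAbortError',
  'RequestAbortedError',
]);

function toRunErrorPayload(err: unknown): { message: string; code?: string } {
  if (err instanceof Error && ABORT_ERROR_NAMES.has(err.name)) {
    // Stable payload: consumers discriminate on code === 'aborted' instead
    // of matching provider-specific message strings.
    return { message: 'Request aborted', code: 'aborted' };
  }
  return { message: err instanceof Error ? err.message : String(err) };
}
```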

Bug fixes captured along the way

openai-base (the shared pipeline):

  • structuredOutput throws a distinct "response contained no content" error rather than letting empty content cascade into a misleading JSON-parse error.
  • Post-loop tool-args drain block now logs malformed JSON via logger.errors, matching the in-loop finish_reason path so truncated streams emitting partial tool args are debuggable instead of silently invoking the tool with {}.
  • Stop processing chunks after a top-level error event so in-flight TEXT_MESSAGE_CONTENT / TOOL_CALL_* events don't leak past the terminal RUN_ERROR.
  • Responses.structuredOutput routes through the transformStructuredOutput hook so subclasses that opt out of null-stripping (OpenRouter) don't have to fork the whole method.
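The "stop after a terminal error" fix boils down to returning from the chunk loop, sketched here as a simplified synchronous version (event shapes are assumptions; the real pipeline is an async generator):

```typescript
type Chunk = { type: string; message?: string };

// Once a top-level error yields RUN_ERROR, stop consuming the stream so no
// TEXT_MESSAGE_CONTENT / TOOL_CALL_* events leak past the terminal event.
function processChunks(chunks: Chunk[]): Chunk[] {
  const out: Chunk[] = [];
  for (const chunk of chunks) {
    if (chunk.type === 'error') {
      out.push({ type: 'RUN_ERROR', message: chunk.message });
      return out; // terminal — ignore anything the upstream still delivers
    }
    out.push(chunk);
  }
  return out;
}
```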

ai-openrouter:

  • stream_options.include_usage correctly camelCased to includeUsage so streaming RUN_FINISHED.usage is populated (was silently dropped by the SDK Zod schema).
  • chunk.error.code stringified on mid-stream errors so provider error codes (401, 429, 500, …) survive the toRunErrorPayload narrow.
  • Assistant toolCalls[].function.arguments stringified to match the SDK's string contract.
  • convertMessage mirrors the base's fail-loud guards (throws on empty user content and unsupported content parts) instead of silently sending empty paid requests.
  • Image data URIs default to application/octet-stream when the source has no MIME type (was producing invalid data:undefined;base64,... URIs).
  • Audio URLs route to text fallback on chat-completions (the wire format has no URL variant for input_audio); inline document data on chat-completions throws (no input_file shape there) — both with explicit guidance to use the Responses adapter.
  • Array-shaped tool message content has its text extracted rather than JSON-stringified (was feeding the literal ContentPart shape back to the model).
  • Speakeasy's { raw, type: 'UNKNOWN', isUnknown: true } discriminated-union fallback events pass through verbatim — Responses streams from upstreams that omit sequence_number / logprobs fields no longer get dropped.
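The data-URI defaulting fix amounts to a fallback MIME type; a minimal sketch (the function name is assumed):

```typescript
// A base64 image source with no MIME type previously produced the invalid
// "data:undefined;base64,..." URI; default to application/octet-stream.
function toImageDataUri(base64Data: string, mimeType?: string): string {
  return `data:${mimeType ?? 'application/octet-stream'};base64,${base64Data}`;
}
```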

ai-groq:

  • ChatCompletionNamedToolChoice shape corrected.
  • Spurious timestamp field removed from processStreamChunks override.
  • pendingMockCreate reset between tests to prevent cross-test pollution.
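The corrected named tool_choice shape follows the OpenAI/Groq chat-completions wire format (type name from the PR; the tool name below is illustrative):

```typescript
// Wire format: { type: 'function', function: { name } } — the old interface
// declared a single capitalized Function key with no type discriminator.
interface ChatCompletionNamedToolChoice {
  type: 'function';
  function: { name: string };
}

const toolChoice: ChatCompletionNamedToolChoice = {
  type: 'function',
  function: { name: 'get_weather' },
};
```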

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Changeset: .changeset/decouple-openrouter-collapse-openai-base.md (covers @tanstack/openai-base minor + ai-openai / ai-grok / ai-groq / ai-openrouter patches).

🤖 Generated with Claude Code


coderabbitai Bot commented May 11, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a4787c3b-83ad-4d5e-887a-19fd21d54af1

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



github-actions Bot commented May 11, 2026

🚀 Changeset Version Preview

9 package(s) bumped directly, 22 bumped as dependents.

🟨 Minor bumps

Package Version Reason
@tanstack/openai-base 0.2.1 → 0.3.0 Changeset

🟩 Patch bumps

Package Version Reason
@tanstack/ai 0.16.0 → 0.16.1 Changeset
@tanstack/ai-anthropic 0.8.6 → 0.8.7 Changeset
@tanstack/ai-gemini 0.10.3 → 0.10.4 Changeset
@tanstack/ai-grok 0.7.3 → 0.7.4 Changeset
@tanstack/ai-groq 0.1.11 → 0.1.12 Changeset
@tanstack/ai-ollama 0.6.13 → 0.6.14 Changeset
@tanstack/ai-openai 0.8.5 → 0.8.6 Changeset
@tanstack/ai-openrouter 0.8.5 → 0.8.6 Changeset
@tanstack/ai-client 0.9.1 → 0.9.2 Dependent
@tanstack/ai-code-mode 0.1.10 → 0.1.11 Dependent
@tanstack/ai-code-mode-models-eval 0.0.15 → 0.0.16 Dependent
@tanstack/ai-code-mode-skills 0.1.10 → 0.1.11 Dependent
@tanstack/ai-devtools-core 0.3.27 → 0.3.28 Dependent
@tanstack/ai-event-client 0.3.0 → 0.3.1 Dependent
@tanstack/ai-fal 0.7.3 → 0.7.4 Dependent
@tanstack/ai-isolate-cloudflare 0.2.1 → 0.2.2 Dependent
@tanstack/ai-isolate-node 0.1.10 → 0.1.11 Dependent
@tanstack/ai-isolate-quickjs 0.1.10 → 0.1.11 Dependent
@tanstack/ai-preact 0.6.22 → 0.6.23 Dependent
@tanstack/ai-react 0.8.2 → 0.8.3 Dependent
@tanstack/ai-solid 0.7.2 → 0.7.3 Dependent
@tanstack/ai-svelte 0.7.2 → 0.7.3 Dependent
@tanstack/ai-vue 0.7.2 → 0.7.3 Dependent
@tanstack/ai-vue-ui 0.1.33 → 0.1.34 Dependent
@tanstack/preact-ai-devtools 0.1.31 → 0.1.32 Dependent
@tanstack/react-ai-devtools 0.2.31 → 0.2.32 Dependent
@tanstack/solid-ai-devtools 0.2.31 → 0.2.32 Dependent
ts-svelte-chat 0.1.41 → 0.1.42 Dependent
ts-vue-chat 0.1.41 → 0.1.42 Dependent
vanilla-chat 0.0.37 → 0.0.38 Dependent


nx-cloud Bot commented May 11, 2026

View your CI Pipeline Execution ↗ for commit d205b23

Command Status Duration Result
nx run-many --targets=build --exclude=examples/** ✅ Succeeded 1m 10s View ↗

☁️ Nx Cloud last updated this comment at 2026-05-14 01:25:27 UTC


pkg-pr-new Bot commented May 11, 2026

Open in StackBlitz

@tanstack/ai

npm i https://pkg.pr.new/@tanstack/ai@545

@tanstack/ai-anthropic

npm i https://pkg.pr.new/@tanstack/ai-anthropic@545

@tanstack/ai-client

npm i https://pkg.pr.new/@tanstack/ai-client@545

@tanstack/ai-code-mode

npm i https://pkg.pr.new/@tanstack/ai-code-mode@545

@tanstack/ai-code-mode-skills

npm i https://pkg.pr.new/@tanstack/ai-code-mode-skills@545

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/@tanstack/ai-devtools-core@545

@tanstack/ai-elevenlabs

npm i https://pkg.pr.new/@tanstack/ai-elevenlabs@545

@tanstack/ai-event-client

npm i https://pkg.pr.new/@tanstack/ai-event-client@545

@tanstack/ai-fal

npm i https://pkg.pr.new/@tanstack/ai-fal@545

@tanstack/ai-gemini

npm i https://pkg.pr.new/@tanstack/ai-gemini@545

@tanstack/ai-grok

npm i https://pkg.pr.new/@tanstack/ai-grok@545

@tanstack/ai-groq

npm i https://pkg.pr.new/@tanstack/ai-groq@545

@tanstack/ai-isolate-cloudflare

npm i https://pkg.pr.new/@tanstack/ai-isolate-cloudflare@545

@tanstack/ai-isolate-node

npm i https://pkg.pr.new/@tanstack/ai-isolate-node@545

@tanstack/ai-isolate-quickjs

npm i https://pkg.pr.new/@tanstack/ai-isolate-quickjs@545

@tanstack/ai-ollama

npm i https://pkg.pr.new/@tanstack/ai-ollama@545

@tanstack/ai-openai

npm i https://pkg.pr.new/@tanstack/ai-openai@545

@tanstack/ai-openrouter

npm i https://pkg.pr.new/@tanstack/ai-openrouter@545

@tanstack/ai-preact

npm i https://pkg.pr.new/@tanstack/ai-preact@545

@tanstack/ai-react

npm i https://pkg.pr.new/@tanstack/ai-react@545

@tanstack/ai-react-ui

npm i https://pkg.pr.new/@tanstack/ai-react-ui@545

@tanstack/ai-solid

npm i https://pkg.pr.new/@tanstack/ai-solid@545

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/@tanstack/ai-solid-ui@545

@tanstack/ai-svelte

npm i https://pkg.pr.new/@tanstack/ai-svelte@545

@tanstack/ai-utils

npm i https://pkg.pr.new/@tanstack/ai-utils@545

@tanstack/ai-vue

npm i https://pkg.pr.new/@tanstack/ai-vue@545

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/@tanstack/ai-vue-ui@545

@tanstack/openai-base

npm i https://pkg.pr.new/@tanstack/openai-base@545

@tanstack/preact-ai-devtools

npm i https://pkg.pr.new/@tanstack/preact-ai-devtools@545

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/@tanstack/react-ai-devtools@545

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/@tanstack/solid-ai-devtools@545

commit: d205b23

tombeckenham added a commit that referenced this pull request May 11, 2026
…ons migration

Addresses regressions and pre-existing silent failures surfaced by reviewing #545:

- `@tanstack/ai`: `toRunErrorPayload` normalizes `AbortError` / `APIUserAbortError` /
  `RequestAbortedError` to `{ code: 'aborted' }` so consumers can discriminate
  user-initiated cancellation without matching provider-specific message strings.
- `@tanstack/openai-base`: `structuredOutput` throws a distinct
  "response contained no content" error instead of cascading into a misleading
  JSON-parse error on an empty string; the post-loop tool-args drain now logs
  malformed JSON via `logger.errors` so truncated streams don't silently invoke
  tools with `{}`.
- `@tanstack/ai-openrouter`: `stream_options.include_usage` is camelCased to
  `includeUsage` (Zod was silently stripping it, leaving `RUN_FINISHED.usage`
  always undefined on streaming); mid-stream `chunk.error.code` is stringified
  so provider codes (401/429/500) survive `toRunErrorPayload`; assistant
  `toolCalls[].function.arguments` is stringified to match the SDK's `string`
  contract; `convertMessage` now mirrors the base's fail-loud guards (empty
  user content, unsupported content parts).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
tombeckenham marked this pull request as ready for review May 12, 2026 04:51
tombeckenham requested a review from AlemTuzlak May 12, 2026 04:51
tombeckenham added a commit that referenced this pull request May 12, 2026
#545's asChunk removal added `threadId` to RUN_STARTED/RUN_FINISHED on the
chatStream path. The structuredOutputStream lift on this branch was emitting
those events without `threadId`; the new `satisfies StreamChunk` checks now
catch it. Plumb `threadId` through structuredOutputStream's aguiState in
both bases.

Also drop the residual `asChunk()` wrappers in my structuredOutputStream
yields and use `type: EventType.X, ... } satisfies StreamChunk` directly,
matching #545's new convention.

While we're here: the chat-completions `processStreamChunks` finalisation
forwards the SDK's `finish_reason` directly into `RUN_FINISHED.finishReason`,
but the SDK type still includes the legacy `function_call` value that AG-UI
doesn't accept. #545's `satisfies` cleanup exposed the mismatch — collapse
`function_call` to `stop` alongside the existing orphan `tool_calls` collapse.
tombeckenham changed the title from "refactor: migrate ai-groq + ai-openrouter onto @tanstack/openai-base (#543)" to "refactor: migrate ai-groq + ai-openrouter onto @tanstack/ai-openai-compatible (#543)" May 13, 2026
@tombeckenham
Contributor Author

Follow-up: complete the openai-SDK decouple at the type level (commit e30a3ca)

Section 4 of the description said @tanstack/ai-openai-compatible lists openai "only as an optional peer." After re-checking with pnpm knip (which surfaces Referenced optional peerDependencies), that was still a half-measure: the runtime was decoupled, but the type surface still leaked openai everywhere — abstract method signatures (OpenAI_SDK.Chat.Completions.* / OpenAI_SDK.Responses.*), all 12 tool-config aliases, and the public re-exports in src/index.ts. Net effect: installing @tanstack/ai-openrouter still pulled openai into the user's node_modules because their TypeScript needed it to resolve types coming through the upstream contract.

This commit finishes the decouple:

  • Hand-wrote minimal wire-format types in packages/typescript/ai-openai-compatible/src/types/{chat-completions,responses,tools}.ts — only the fields the base actually reads/writes, structurally compatible with the openai SDK's richer shapes.
  • Swapped every import type ... from 'openai' in the base adapters, tool converters, and 12 tool files to use the local types. Public re-exports in src/index.ts now source from ./types/*.
  • Dropped openai from @tanstack/ai-openai-compatible's peerDependencies, peerDependenciesMeta, and devDependencies (no test file imported it).
  • Dropped openai from @tanstack/ai-openrouter's devDependencies (its sole purpose was satisfying the leaked type contract — @openrouter/sdk doesn't depend on it at runtime).
  • Updated @tanstack/ai-openai's text adapter overrides (callResponse, callResponseStream, mapOptionsToRequest) to use the local protocol types and cast at the SDK boundary inside the body — the only way to keep variance compat with the now-local base while still calling this.client.responses.create().
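The "minimal, structurally compatible wire types" idea can be sketched as follows: the local type declares only the fields the base reads, so any richer SDK-shaped object assigns to it structurally with no openai import at the type level. Field names follow the chat-completions wire format; the richer object below is a stand-in for the SDK's shape, not the SDK itself.

```typescript
// Hand-written local wire types — only what the base actually reads.
interface LocalChunkChoice {
  delta: { content?: string | null };
  finish_reason?: string | null;
}
interface LocalChatCompletionChunk {
  choices: Array<LocalChunkChoice>;
}

// A richer, SDK-like object: extra fields (id, object, index) are fine
// under TypeScript's structural typing when assigning a variable.
const sdkShaped = {
  id: 'chatcmpl-123',
  object: 'chat.completion.chunk',
  choices: [{ index: 0, delta: { content: 'hi' }, finish_reason: null }],
};

const chunk: LocalChatCompletionChunk = sdkShaped;
```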

Verified

  • pnpm test:types across ai-openai-compatible / ai-openai / ai-groq / ai-grok / ai-openrouter — clean
  • pnpm test:lib — 351 tests pass across the chain
  • pnpm knip — exit 0, the previous Referenced optional peerDependencies (1) openai warning is gone
  • pnpm test:sherif — no issues
  • pnpm test:build (publint --strict) — passes for all 5 packages
  • grep "from ['\"]openai" packages/typescript/ai-openai-compatible/dist packages/typescript/ai-openrouter/dist — zero matches in both

Net effect: end-users installing @tanstack/ai-openrouter (or any future protocol-compatible adapter that doesn't itself call the openai SDK at runtime) no longer pull openai into their dependency tree at all — not as a transitive dep, not as a peer warning, not as anything. The only packages that still list openai are ai-openai / ai-groq / ai-grok, which legitimately instantiate the OpenAI SDK as their HTTP client.

A follow-up changeset for this is still TODO (it's a notable behaviour improvement for ai-openrouter consumers worth calling out in release notes).

tombeckenham changed the title from "refactor: migrate ai-groq + ai-openrouter onto @tanstack/ai-openai-compatible (#543)" to "refactor(openai-base): rename, adopt openai SDK, decouple ai-openrouter" May 13, 2026
tombeckenham and others added 17 commits May 13, 2026 20:35
…543)

Adds protected `callChatCompletion`, `callChatCompletionStream`,
`extractReasoning`, and `transformStructuredOutput` hooks to
`OpenAICompatibleChatCompletionsTextAdapter` so providers with non-OpenAI
SDK shapes can reuse the shared stream accumulator, partial-JSON tool-call
buffer, RUN_ERROR taxonomy, and lifecycle gates. ai-groq drops `groq-sdk`
in favour of the OpenAI SDK pointed at api.groq.com/openai/v1; ai-openrouter
keeps `@openrouter/sdk` via hook overrides. ai-ollama remains on
BaseTextAdapter (native API has a different wire format).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds OpenRouterResponsesTextAdapter on top of @tanstack/openai-base's
responses-text base, mirroring the chat-completions migration in #543.

- openai-base: protected `callResponse` / `callResponseStream` hooks on
  OpenAICompatibleResponsesTextAdapter parallel to the existing
  `callChatCompletion*` hooks, so providers whose SDK has a different call
  shape can override without forking processStreamChunks. Re-exports the
  OpenAI Responses SDK types subclasses need.
- ai-openrouter: new OpenRouterResponsesTextAdapter routing through
  `client.beta.responses.send({ responsesRequest })`. Emits the SDK's
  camelCase TS shape directly via overrides of convertMessagesToInput /
  convertContentPartToInput / mapOptionsToRequest, annotated with
  `Pick<ResponsesRequest, ...>` so future SDK field renames break the build
  instead of silently producing Zod-stripped wire payloads. Bridges
  inbound stream events camel -> snake so the base's processStreamChunks
  reads documented fields unchanged.
- Function tools only in v1; webSearchTool() throws with a clear error
  pointing at the chat-completions adapter.
- Folds in the silent-failure lessons from 0171b18 (stringified error
  codes, stringified tool-call arguments, fail-loud on empty user content).
- E2E: new `openrouter-responses` provider slot in feature-support /
  test-matrix / providers / types / api.summarize, reusing aimock's
  native `/v1/responses` handler.
- 10 new unit tests covering request mapping (snake -> camel for top-level
  fields, function-call camelCasing in input[], variant suffix),
  stream-event bridge (text deltas, function-call lifecycle,
  response.failed, top-level error code stringification),
  webSearchTool() rejection, and SDK constructor wiring.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Removes `validateTextProviderOptions` (no-op stub never called) and the
chain of `ChatCompletion*MessageParam` / `ChatCompletionContentPart*` /
`ChatCompletionMessageToolCall` types that were only referenced by it.
Unblocks the root `test:knip` CI check.

None of the removed exports are re-exported from the package's public
`src/index.ts`, so this is internal-only cleanup.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The OpenRouter SDK's stream-event schema is built with Speakeasy's
discriminated-union helper, which on a per-variant parse failure falls
back to `{ raw, type: 'UNKNOWN', isUnknown: true }` rather than throwing.
This happens whenever an upstream omits an "optional-looking" required
field — notably `sequence_number` and `logprobs` on text/reasoning delta
events, which aimock-served fixtures don't include.

Before this fix the adapter's switch hit the default branch for UNKNOWN
events and emitted them with no usable `type`, so the base's
processStreamChunks ignored them silently — the run terminated as
`RUN_FINISHED { finishReason: 'stop' }` with no content.

The `raw` payload preserved on the fallback is the original wire-shape
event in snake_case, which is exactly what processStreamChunks reads.
Re-emit it verbatim. Real-OpenRouter responses still flow through the
existing camel -> snake bridge because their events include the required
fields and parse cleanly.

Unblocks the openrouter-responses E2E suite: 11 affected tests now pass
locally against aimock; before this commit they all timed out empty.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Replaces ~200 sites of `asChunk({ type: 'X', ... })` (a `Record<string,
unknown> as unknown as StreamChunk` cast) with `({ type: EventType.X, ... })
satisfies StreamChunk` so the type system validates AG-UI event shape at
every emission. The cast was bypassing TypeScript's string-enum nominal
typing and masking a cluster of spec deviations now fixed:

- RUN_STARTED / RUN_FINISHED in openai-base (chat-completions + responses)
  and all three summarize adapters were missing the AG-UI-required
  `threadId`. Threading `options.threadId ?? generateId(this.name)` through
  `aguiState` (matching the existing Gemini/Anthropic pattern) fixes it.
- RUN_ERROR emissions carried a non-existent `runId` field and the
  deprecated nested `error: { message, code }` form instead of AG-UI's
  top-level `message`/`code`. Both forms now coexist (deprecated kept for
  back-compat) and `runId` is dropped — verified no consumer reads it
  (chat-client.ts:404 only reads runId on RUN_FINISHED).
- STEP_STARTED / STEP_FINISHED in responses-text.ts were passing only the
  deprecated `stepId` alias; AG-UI requires `stepName`. Now passes both.
- `finishReason` in chat-completions-text.ts was typed as `string`,
  dropping below the AG-UI vocabulary. Widened `RunFinishedEvent.finishReason`
  in `@tanstack/ai` to include OpenAI's `'function_call'` so it narrows
  cleanly. responses-text.ts maps Responses-API `'max_output_tokens'` →
  `'length'` and passes `'content_filter'` through.
- Per-event timestamps. AG-UI spec: "Optional timestamp indicating when
  the event was created." Previously a single `const timestamp = Date.now()`
  was captured at run start and reused on every emission across the eight
  adapters; each chunk now uses `Date.now()` inline.

`@tanstack/ai/tests/test-utils.ts` `ev.*` builders are typed to return
precise event members via `satisfies StreamChunk`; the loose `chunk(type,
fields)` factory is preserved as a documented escape hatch for tests that
deliberately construct off-spec fixtures. ai-client tests no longer declare
a local `asChunk`. ai-groq's `processStreamChunks` override signature is
updated to include the new `threadId` field on `aguiState`.

Out of scope, flagged for follow-up:
- Framework tests (ai-react / ai-svelte / ai-vue) with inline string-literal
  chunk arrays — their test directories aren't currently type-checked, so
  they compile despite being off-spec.
- Summarize adapters omit TEXT_MESSAGE_START / TEXT_MESSAGE_END around
  content emissions (separate AG-UI lifecycle gap).

Verified: pnpm -r test:types, test:lib, test:eslint, test:build all green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The chat adapter's convertMessage JSON-stringified Array<ContentPart>
assistant content (so a multi-part assistant turn would round-trip as
the literal JSON of the parts instead of joined text) and emitted
`content: undefined` for tool-call-only assistants where the OpenAI
Chat Completions contract documents `null`. Use the base's
extractTextContent + emit `null` for the tool-call-only case so the
override matches the chat-completions base.

The Responses adapter's convertMessagesToInput tool branch had the
same shape — JSON.stringify(message.content) fed the raw ContentPart
shape into function_call_output.output for structured tool results.
Use extractTextContent there too.

Regression tests assert (a) array-shaped assistant content extracts to
joined text rather than JSON, and (b) tool-call-only assistant content
emits `null` rather than `undefined`.
The interface declared a single capitalized `Function` key with no
`type` discriminator. The OpenAI / Groq Chat Completions wire format
for a named tool_choice is `{ type: 'function', function: { name } }`.
Construct a literal against the old type and the SDK's Zod schema
would either reject it or treat tool_choice as unset.

No production code constructs this type literally yet — only the
`ChatCompletionToolChoiceOption` union in the same file uses it — so
fixing the shape now is a no-op at runtime but locks the type to the
correct contract going forward.
The module-level pendingMockCreate is only cleared inside
applyPendingMock when a factory call consumes it. Tests in the first
describe block instantiate the adapter without calling
setupMockSdkClient first, so a leaked value from a prior test would
inject a stale mock into a later adapter. Reset in beforeEach for
deterministic ordering regardless of test-runner permutation.
The feature-support matrix advertises summarize / summarize-stream for
both `openrouter` and `openrouter-responses`, but the factories
silently substituted `createOpenaiSummarize` against the OpenAI base
URL — exercising the OpenAI adapter while reporting OpenRouter
coverage. Wire `createOpenRouterSummarize` (a thin wrapper over the
OpenRouter chat adapter, used for both rows since the summarize
endpoint is chat-completions-only) against the LLMOCK base so the
matrix's claim is actually verified.
Sibling adapters (`ai-openai`, `ai-groq`, `ai-grok`) all declare zod
as a peerDependency so a consumer that passes a Zod tool schema gets
a single zod instance shared with this adapter. Without the peerDep,
strict installs (pnpm `strict-peer-dependencies`, yarn berry pnp) can
end up with two zod copies — one transitive via `@openrouter/sdk` or
`@tanstack/ai`, one direct — and `instanceof ZodType` checks then
fail across the boundary.
…override

The Groq subclass declared its aguiState parameter with an extra
`timestamp: number` field that does not exist on the base class's
aguiState type. TypeScript's bivariant method-parameter checks let
the wider type pass typecheck, but at runtime the body never reads
`timestamp` and the field is never populated by the base, so any
caller (or future override) that relied on the declared shape would
observe `undefined`. Realign the override's parameter type with the
base.
The chunk-level 'error' branch in adaptOpenRouterResponsesStreamEvents
already stringifies provider codes so they survive
toRunErrorPayload's string-only code filter, but the parallel
response.failed / response.incomplete path went through
toSnakeResponseResult which forwarded `r.error.code` raw. A provider
that returned a numeric code (401/429/500/…) on a terminal failure
event would lose it on the way through to RUN_ERROR.

Mirror the chunk-level stringification inside toSnakeResponseResult
and add a regression test for response.failed with a numeric
error.code.
When a base64 image source has no mimeType the override produced a
literal `data:undefined;base64,...` URI that the upstream rejects as
invalid. The chat-completions base defaults to
`application/octet-stream` for exactly this case; mirror the same
defaulting in the OpenRouter convertContentPart override. Regression
test asserts the data URI no longer contains the literal `undefined`.
The Responses adapter's processStreamChunks marked `runFinishedEmitted`
on a top-level chunk.type === 'error' to prevent the synthetic
terminal block from firing, but it did not return from the for-await
loop. Any subsequent chunks the upstream delivered after a terminal
error event (a stray output_text.delta, an output_item.done, etc.)
would continue to emit lifecycle events past RUN_ERROR, violating the
'RUN_ERROR is terminal' contract.

Mirror the response.failed / response.incomplete branches above:
return after yielding RUN_ERROR. Regression test covers the case
where the upstream continues delivering chunks after a top-level
error event and asserts no further chunks reach the consumer.
…ough transformStructuredOutput hook

The Responses base hard-coded transformNullsToUndefined on parsed
structured-output JSON, leaving no hook for subclasses to opt out.
The changeset's promise of 'transformStructuredOutput for subclasses
(like OpenRouter) that preserve nulls in structured output instead
of converting them to undefined' was therefore only fulfilled on
the chat-completions surface — the matching Responses adapter would
silently strip nulls regardless of provider intent.

Add the transformStructuredOutput protected hook on
OpenAICompatibleResponsesTextAdapter mirroring the chat-completions
base's design, and override it as a no-op on
OpenRouterResponsesTextAdapter so OpenRouter callers see null
sentinels round-trip identically across the two adapter surfaces.

Regression test asserts a structuredOutput response containing
`nickname: null` round-trips as null (not undefined) through the
Responses adapter.
tombeckenham added a commit that referenced this pull request May 13, 2026
#545's asChunk removal added `threadId` to RUN_STARTED/RUN_FINISHED on the
chatStream path. The structuredOutputStream lift on this branch was emitting
those events without `threadId`; the new `satisfies StreamChunk` checks now
catch it. Plumb `threadId` through structuredOutputStream's aguiState in
both bases.

Also drop the residual `asChunk()` wrappers in my structuredOutputStream
yields and use `type: EventType.X, ... } satisfies StreamChunk` directly,
matching #545's new convention.

While we're here: the chat-completions `processStreamChunks` finalisation
forwards the SDK's `finish_reason` directly into `RUN_FINISHED.finishReason`,
but the SDK type still includes the legacy `function_call` value that AG-UI
doesn't accept. #545's `satisfies` cleanup exposed the mismatch — collapse
`function_call` to `stop` alongside the existing orphan `tool_calls` collapse.
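The collapse this particular commit describes can be sketched as below (a later commit in this PR renormalizes to `tool_calls` instead; the unions here approximate the SDK and AG-UI vocabularies rather than quoting them):

```typescript
// Illustrative sketch: map the SDK's legacy finish_reason value into the
// vocabulary AG-UI accepts.
type SdkFinishReason = 'stop' | 'length' | 'tool_calls' | 'content_filter' | 'function_call'
type AguiFinishReason = Exclude<SdkFinishReason, 'function_call'>

function collapseFinishReason(reason: SdkFinishReason): AguiFinishReason {
  // AG-UI doesn't accept the legacy v1 'function_call' value.
  return reason === 'function_call' ? 'stop' : reason
}
```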
@tombeckenham
Contributor Author

Heads-up: the latest push reverts the package name back to @tanstack/openai-base.

Earlier in this PR we tried two intermediate names (@tanstack/openai-compatible, then @tanstack/ai-openai-compatible) to signal the multi-vendor protocol surface. Now that ai-openrouter has been decoupled entirely (it extends BaseTextAdapter directly with its own duplicated stream processors), the only remaining consumers — ai-openai, ai-grok, ai-groq — all back onto the openai SDK with a different baseURL. "openai-base" describes that role accurately and matches the package's original name.

Imports for downstream consumers:

- import { OpenAICompatibleChatCompletionsTextAdapter } from '@tanstack/ai-openai-compatible'
+ import { OpenAIBaseChatCompletionsTextAdapter } from '@tanstack/openai-base'

@tanstack/ai-openai-compatible@0.2.x and @tanstack/openai-compatible@* remain published for any pinned-lockfile consumers but will receive no further updates.

AlemTuzlak and others added 4 commits May 13, 2026 19:49
Apply review feedback from PR #545:

- Restore JSDoc removed during the openai-base media/summarize refactor
  (26 blocks across ai-openai, ai-grok, ai-anthropic, ai-gemini,
  ai-openrouter adapters). Only restore where the documented symbol still
  exists post-refactor; skip JSDoc tied to removed classes / provider-
  options interfaces.
- Drop `as` casts on stream chunks in ai-openrouter (responses-text.ts
  output_item.{added,done} handlers, response.completed handler) by typing
  `NormalizedStreamEvent.item` as the SDK's `OutputItems` discriminated
  union and `.response` as `Partial<OpenResponsesResult>`. Discriminated-
  union narrowing now works without bypass.
- Drop request-builder casts in ai-openrouter/{text,responses-text}.ts:
  `as InputsItem`, `as ChatMessages`, `as ChatContentItems`,
  `as ResponsesRequest['tools' | 'text' | 'input']`,
  `as Omit<ChatRequest, 'stream'>`, `as Record<string, any>` on
  modelOptions spread.
- Drop SDK-return casts `as AsyncIterable<StreamEvents>` /
  `as AsyncIterable<ChatStreamChunk>` — `EventStream<T>` already is
  `AsyncIterable<T>`.
- Drop `tool as Tool` in the webSearchTool guard — `Tool<any, any, any>`
  is assignable to `Tool` directly.
- Remove `'function_call'` from RunFinishedEvent.finishReason union.
  Normalize OpenAI's legacy v1 function_call termination to `tool_calls`
  inside chat-completions-text — the SDK-vocabulary value no longer leaks
  into the public AG-UI type.
- Drop redundant `satisfies StreamChunk` from yield/array-element sites
  across adapters and ai-client tests. The contextual type from
  `AsyncIterable<StreamChunk>` / `Array<StreamChunk>` already validates
  every emission; the suffix added no extra safety.
- Annotate the `ev.*` builders in ai/tests/test-utils.ts with explicit
  return types (RunStartedEvent, TextMessageStartEvent, …) instead of
  `satisfies StreamChunk`. Each builder now returns the precise event
  variant rather than the wide union.
- Drop zod from ai-openrouter peerDependencies — no source imports zod;
  it's only used in tests, where it stays as a devDep. (OpenRouter SDK
  already declares zod as a regular dep, so transitive consumers aren't
  affected.)
- Clean up mid-PR rename leftovers: stale "openai-compatible adapters"
  jsdoc in ai-openai/utils/client.ts, and `'openai-compatible'` /
  `'openai-compatible-responses'` default-name strings in the
  openai-base test subclasses (now `openai-base` / `openai-base-responses`).
Extend the no-`as`-on-chunks principle (PR #545 review) to five sibling
sites missed by 44db925:

- `response.created/in_progress/incomplete/failed` model + error/incomplete
  capture (lines 462, 491): `NormalizedStreamEvent.response` is already
  `Partial<OpenResponsesResult>`, so the duck-type casts were redundant.
  Read `chunk.response?.{model,error,incompleteDetails}` directly.
- `response.content_part.{added,done}` (lines 629, 673): type
  `NormalizedStreamEvent.part` as the SDK's `ContentPartAddedEventPart`
  discriminated union (`ResponseOutputText | ReasoningTextContent |
  OpenAIResponsesRefusalContent | Unknown<'type'>`) and switch
  `handleContentPart` to narrow on `part.type`. The previous `text?` /
  `refusal?` duck-type allowed unsafe access on unknown parts.
- `response.completed` `outputItems.some(item.type === 'function_call')`
  (line 998): the array element type is already `OutputItems`, and line 921
  above already narrows without a cast — the remaining cast was a leftover.

Behaviourally identical; verified by openrouter unit tests (80/80) and
e2e suite (30/30).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
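The duck-type-to-discriminant change described above follows this general pattern. The variants here are simplified stand-ins for the SDK union the commit names:

```typescript
// Sketch: narrow on `part.type` instead of probing optional `text` /
// `refusal` fields, which allowed unsafe access on unknown parts.
type ContentPart =
  | { type: 'output_text'; text: string }
  | { type: 'reasoning_text'; text: string }
  | { type: 'refusal'; refusal: string }

function handleContentPart(part: ContentPart): string | undefined {
  switch (part.type) {
    case 'output_text':
    case 'reasoning_text':
      return part.text
    case 'refusal':
      return part.refusal
    default:
      // Unknown forward-compat variants are ignored rather than guessed at.
      return undefined
  }
}
```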
…known>

Unify the generic constraint and default across the summarize surface:

- `SummarizationOptions`: `extends object = Record<string, any>` →
  `extends Record<string, unknown> = Record<string, unknown>`
- `SummarizeAdapter` / `BaseSummarizeAdapter`: constraint tightened from
  `extends object` to `extends Record<string, unknown>` (default was
  already `Record<string, unknown>`)
- `ChatStreamSummarizeAdapter`: `extends object = Record<string, any>` →
  `extends Record<string, unknown> = Record<string, unknown>`
- `activities/summarize/index.ts` instantiation sites: literal
  `<string, object>` → `<string, Record<string, unknown>>`

Removes the three-way default split (`object` / `Record<string, any>` /
`Record<string, unknown>`) that lived inside the summarize folder, and
forces unparameterised consumers to narrow before indexed access.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@tombeckenham tombeckenham requested a review from AlemTuzlak May 13, 2026 23:29
autofix-ci Bot and others added 5 commits May 13, 2026 23:30
README:
- Drop the broken "Renamed from" note (referenced an outdated state).
- Drop the Vercel `@ai-sdk/openai-compatible` industry-term paragraph and
  the surrounding "Why this package exists" rationale that explained the
  prior rename — package is back to `openai-base`, that history is moot.
- Reframe TL;DR around the actual current contract: "providers that drive
  the official `openai` SDK against a different `baseURL`" (only
  ai-openai, ai-grok, ai-groq remain on the base after this PR).
- Remove ai-openrouter from subclass lists and the architecture diagram —
  it was decoupled in this PR and now extends `BaseTextAdapter` directly.
- Rewrite the hooks section: the old `callChatCompletion(Stream)` /
  `callResponse(Stream)` abstract methods were removed in 7aff8b1; the
  base now takes a pre-built `OpenAI` client and calls
  `client.chat.completions.create` / `client.responses.create` itself.
  Document `convertMessage`, `mapOptionsToRequest`, `extractReasoning`,
  `transformStructuredOutput`, `makeStructuredOutputCompatible`,
  `processStreamChunks`, `extractTextFromResponse` as the real surface.
- Update "build a new provider" example to point at ai-grok / ai-groq.

Changesets:
- Replace the narrow `summarize-tighten-provider-options-generic.md`
  (which only covered 6d99fad) with a comprehensive
  `summarize-unify-on-chat-stream-wrapper.md` that also covers e0dcb77
  (provider summarize unification on `ChatStreamSummarizeAdapter`,
  `modelOptions` plumbing fix in the activity layer, new
  `InferTextProviderOptions<TAdapter>` helper, and removal of the
  bespoke `*SummarizeProviderOptions` interfaces from 6 provider
  packages). Adds patch bumps for ai-anthropic / ai-gemini / ai-ollama
  which were previously uncovered.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
6d99fad tightened the constraint from `extends object` to `extends
Record<string, unknown>` alongside aligning the default. The default
change was correct; the constraint change broke vite build / DTS emit
for ai-openai, ai-anthropic, ai-gemini, ai-grok, ai-ollama. Their
summarize factories instantiate `ChatStreamSummarizeAdapter<TModel,
InferTextProviderOptions<XTextAdapter<TModel>>>`, and the inferred
per-model option shapes (`OpenAIBaseOptions & OpenAIReasoningOptions &
...` etc.) are typed interfaces with named optional fields and no
string index signature — TS won't assign them to
`Record<string, unknown>`.

Revert just the constraint to `extends object`, keep the default at
`Record<string, unknown>`. Restores the pattern `BaseSummarizeAdapter`
already had on main, now applied uniformly across all four
declarations. The 7 activity-layer `<string, Record<string, unknown>>`
instantiations in summarize/index.ts revert to `<string, object>`, and
the two `summarizeOptions: SummarizationOptions = {...}` literals are
explicitly annotated `SummarizationOptions<object>` so the
`modelOptions: object | undefined` destructured from the activity-layer
options assigns correctly.

Changeset paragraph 5 amended to describe what actually shipped
(default-aligned, constraint preserved).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
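The TypeScript behaviour behind this revert can be demonstrated in a few lines. `ReasoningOptions` is a hypothetical stand-in for the inferred per-model option shapes the commit mentions:

```typescript
// Why the constraint (not the default) broke DTS emit: an interface with
// named optional fields and no string index signature is assignable to
// `object` but not to `Record<string, unknown>`.
interface ReasoningOptions {
  effort?: 'low' | 'medium' | 'high'
  maxOutputTokens?: number
}

type Extends<A, B> = A extends B ? true : false

// Type-level facts, surfaced as runtime booleans:
const fitsObject: Extends<ReasoningOptions, object> = true
const fitsRecord: Extends<ReasoningOptions, Record<string, unknown>> = false
```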
Let TS infer summarizeOptions from the literal in runSummarize /
runStreamingSummarize. The contextual check happens at the
adapter.summarize(...) / adapter.summarizeStream(...) call site against
the adapter's own typed signature, which is sufficient — the explicit
local annotation was just visual noise. Drops the unused
SummarizationOptions import too.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
tombeckenham added a commit that referenced this pull request May 14, 2026
tombeckenham and others added 2 commits May 14, 2026 11:22
…t data

Follow-up to the cast removals: where the old `as unknown as StreamChunk`
casts were hiding real data-shape issues, fix the data instead of
re-introducing the bypass.

Source:
- ai-client/src/connection-adapters.ts: synth RUN_FINISHED chunk now
  includes `threadId` (the cast had been hiding the missing required
  field). Use `EventType.RUN_FINISHED` / `EventType.RUN_ERROR` literals.

Test helpers (`chunk()` / `makeChunk()` / `sc()`):
- Replace string-typed `(type: string, fields) => StreamChunk` (which
  needed `as unknown as StreamChunk` to lie) with a generic
  `<T extends StreamChunk['type']>(type: T, fields?) =>
   Extract<StreamChunk, { type: T }>`. One typed cast remains inside
  each helper at the boundary; no `as unknown` casts.
- `sc()` retyped as a typed identity (`<T extends StreamChunk>(c: T) => T`)
  so inline literal narrowing flows from the `type` discriminant.

Inline literals + missing fields fixed at call sites:
- All `chunk('X', ...)` → `chunk(EventType.X, ...)` across
  stream-processor.test.ts (42), strip-to-spec-middleware.test.ts (4),
  chat.test.ts (1).
- All `type: 'X'` inside test object literals → `type: EventType.X`
  across stream-to-response, custom-events-integration, extend-adapter,
  stream-processor (the four MESSAGES_SNAPSHOT inline literals).
- extend-adapter mock RUN_FINISHED gained `threadId`.
- custom-events-integration TOOL_CALL_START gained `toolCallName`
  (the cast had been hiding the missing required field).
- stream-processor MESSAGES_SNAPSHOT bodies (the two whose casts were
  removed) converted from TanStack `UIMessage` shape (parts/createdAt)
  to AG-UI `Message` shape (id/role/content) — the processor casts
  internally, but the upstream MessagesSnapshotEvent.messages field
  requires AG-UI Message.

types.ts is untouched.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
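The generic helper shape described above looks roughly like this. `StreamChunk` here is a tiny stand-in for the real AG-UI union:

```typescript
// Sketch of the generic test helper: one typed cast remains at the boundary,
// and the return type narrows to the exact event variant for the given
// `type`, so no `as unknown` is needed anywhere.
type StreamChunk =
  | { type: 'RUN_STARTED'; threadId: string }
  | { type: 'RUN_FINISHED'; threadId: string }
  | { type: 'TEXT_MESSAGE_CONTENT'; delta: string }

function chunk<T extends StreamChunk['type']>(
  type: T,
  fields?: Omit<Extract<StreamChunk, { type: T }>, 'type'>,
): Extract<StreamChunk, { type: T }> {
  return { type, ...fields } as Extract<StreamChunk, { type: T }>
}
```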
tombeckenham added a commit that referenced this pull request May 14, 2026
@tombeckenham tombeckenham merged commit 0f17a38 into main May 14, 2026
10 checks passed
@tombeckenham tombeckenham deleted the 543-migrate-ai-groq-ai-openrouter-ai-ollama-to-openai-base-+-parameterize-the-base-for-sdk-shape-variance branch May 14, 2026 02:21
@github-actions github-actions Bot mentioned this pull request May 14, 2026
tombeckenham added a commit that referenced this pull request May 14, 2026
AlemTuzlak added a commit that referenced this pull request May 14, 2026
…+ summarize fix (#527)

* feat(ai): streaming structured output (chat outputSchema + stream:true)

Adds an optional `structuredOutputStream` method to the `TextAdapter` interface
plus the activity-layer wiring so `chat({ outputSchema, stream: true })` returns
a typed `StructuredOutputStream<T>`. The stream yields raw JSON deltas via the
existing TEXT_MESSAGE_* lifecycle and terminates with a CUSTOM
`structured-output.complete` event whose `value` is `{ object, raw, reasoning? }`.

Adapters that don't implement `structuredOutputStream` natively fall back to
`fallbackStructuredOutputStream`, which wraps the non-streaming
`structuredOutput()` call so consumers see a consistent lifecycle on every
adapter. With tools, the activity layer runs the agent loop, drops its
RUN_STARTED/RUN_FINISHED, and lets the structured stream bracket the run.

`TextActivityResult` uses `[TStream] extends [true]` (not bare `TStream extends true`)
so the default `boolean` value of `TStream` does *not* match the streaming branch.
This fixes #526 where `chat({ outputSchema })` typed as a stream while the
runtime returned a Promise.
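The tuple wrapping matters because bare conditional types distribute over unions: the default `boolean` is the union `true | false`, so `TStream extends true` partially matches the streaming branch. A minimal demonstration, with illustrative names standing in for `TextActivityResult`'s branches:

```typescript
// Distributive vs non-distributive conditional types.
type BareMatch<TStream extends boolean> = TStream extends true ? 'stream' : 'promise'
type TupleMatch<TStream extends boolean> = [TStream] extends [true] ? 'stream' : 'promise'

// BareMatch<boolean> distributes to 'stream' | 'promise' (the #526 bug),
// so the streaming literal is accepted here.
const bare: BareMatch<boolean> = 'stream'
// [boolean] is not exactly [true], so TupleMatch<boolean> is 'promise' only.
const tuple: TupleMatch<boolean> = 'promise'
```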

Native streaming structured output for each provider lands in follow-up
commits via a centralised lift into @tanstack/openai-base.

* feat(openai-base): centralised structuredOutputStream + isAbortError hook

Adds `structuredOutputStream` to both `OpenAICompatibleChatCompletionsTextAdapter`
and `OpenAICompatibleResponsesTextAdapter`. Chat Completions issues a single
request with `response_format: json_schema` + `stream: true`; Responses uses
`text.format: json_schema` + `stream: true`. Subclasses inherit the method —
reasoning lifecycle flows through the existing `extractReasoning` hook (Chat
Completions) or Responses-API event-type discrimination (Responses), and the
final parsed JSON runs through the existing `transformStructuredOutput` hook.

Subclass changes:
- ai-groq: new `extractReasoning` override reading `delta.reasoning` /
  `delta.reasoning_content` so Groq reasoning models stream reasoning under
  the centralised path. (ai-groq's existing `processStreamChunks` override
  only fires on the chatStream path; the new structuredOutputStream
  independently captures usage from `chunk.x_groq?.usage` outside the
  `choices[0]` guard.)
- ai-grok: new `extractReasoning` override for xAI's `reasoning_content` /
  `reasoning` convention.
- ai-openrouter: new `isAbortError` override mapping `RequestAbortedError`
  from `@openrouter/sdk` to `RUN_ERROR { code: 'aborted' }`. Existing
  `extractReasoning` (`_reasoningText` on adapted chunks) and
  `transformStructuredOutput` (identity, preserves nulls) overrides apply
  to the new path unchanged.

Net deletion: ~1k LOC of per-adapter structuredOutputStream implementations
(landed in prior commit but never reached production) collapse into ~330 LOC
in the chat-completions base + ~340 LOC in the responses base.
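The `extractReasoning` override pattern can be sketched as below. The delta field names follow the commit text; the class shapes are simplified, and the method is public here (it is `protected` on the real adapters):

```typescript
// Sketch of the per-provider extractReasoning hook.
interface ChunkDelta {
  content?: string
  reasoning?: string
  reasoning_content?: string
}

class BaseAdapterSketch {
  // Base default: no provider-specific reasoning channel.
  extractReasoning(_delta: ChunkDelta): string | undefined {
    return undefined
  }
}

class GroqAdapterSketch extends BaseAdapterSketch {
  // Groq reasoning models surface reasoning under either key.
  extractReasoning(delta: ChunkDelta): string | undefined {
    return delta.reasoning ?? delta.reasoning_content
  }
}
```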

* fix(openai-base): tighten structuredOutputStream conditionals for eslint

Drop dead `hasEmittedTextMessageEnd` flag (only set, never read), unwrap
unneeded `?.` on `chunk.choices[0]` (type already nullable), and remove
`?? 0` fallbacks on SDK-typed numeric usage fields.

* refactor: drop `as unknown as` from streaming structured-output paths

Replace `as unknown as StreamChunk` casts in fallbackStructuredOutputStream
and runStreamingStructuredOutputImpl with `satisfies StreamChunk` on
EventType-enum-tagged event literals (the AG-UI types tag `.type` with
`EventType.*` enum values, not string literals — so import `EventType`
and use it).

The custom-event narrow now uses the existing `isStructuredOutputCompleteEvent`
type guard instead of an inline shape check + cast, which lets the inner
`value` reference drop its `as { object; raw; reasoning? }` cast.

In openai-base, the request-cleanup destructures now operate on the SDK's
typed params directly (the OpenAI SDK types are well-formed enough to
spread without coercing to `Record<string, unknown>` first).

* fix(openai-base): align structuredOutputStream with #545 asChunk cleanup

#545's asChunk removal added `threadId` to RUN_STARTED/RUN_FINISHED on the
chatStream path. The structuredOutputStream lift on this branch was emitting
those events without `threadId`; the new `satisfies StreamChunk` checks now
catch it. Plumb `threadId` through structuredOutputStream's aguiState in
both bases.

Also drop the residual `asChunk()` wrappers in my structuredOutputStream
yields and use `type: EventType.X, ... } satisfies StreamChunk` directly,
matching #545's new convention.

While we're here: the chat-completions `processStreamChunks` finalisation
forwards the SDK's `finish_reason` directly into `RUN_FINISHED.finishReason`,
but the SDK type still includes the legacy `function_call` value that AG-UI
doesn't accept. #545's `satisfies` cleanup exposed the mismatch — collapse
`function_call` to `stop` alongside the existing orphan `tool_calls` collapse.

* ci: apply automated fixes

* fix: align structured streaming with 543 openai-base + port to ai-openrouter

After rebasing onto #543 (openai-base adopts the openai SDK directly and
decouples ai-openrouter), wire the structured-output stream to the SDK
client and re-implement it inside ai-openrouter:

- openai-base: call `this.client.chat.completions.create` /
  `this.client.responses.create` directly instead of the removed
  `callChatCompletion*` / `callResponse*` abstract hooks; drop the
  defensive cast on `response.completed` now that `chunk.response`
  narrows via the SDK's `ResponseStreamEvent` union.
- ai-openrouter: add `structuredOutputStream` mirroring the openai-base
  implementation, adapted to OpenRouter's camelCase wire shape
  (`responseFormat` / `streamOptions: { includeUsage: true }`) and SDK
  call surface (`orClient.chat.send({ chatRequest })`). Maps both DOM
  `AbortError` and SDK `RequestAbortedError` to `RUN_ERROR { code: 'aborted' }`.
- ai-grok / ai-groq: switch to the canonical
  `OpenAI.Chat.Completions.ChatCompletionChunk` namespace form (ai-grok
  was importing a non-existent re-export from `@tanstack/openai-base`).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(ai-openrouter): structuredOutputStream for Responses (beta) adapter

Adds streaming structured output to `OpenRouterResponsesTextAdapter` for
parity with the chat-completions variant and the openai-base Responses
adapter. Single call to `beta.responses.send` with
`text.format: { type: 'json_schema', strict: true }` + `stream: true`;
events flow through the existing `normalizeStreamEvent` so the canonical
shape matches `processStreamChunks` (including the Speakeasy
UNKNOWN-with-`raw` fallback for events that fail strict per-variant
validation upstream).

Adaptations vs the openai-base port: camelCase usage shape
(`inputTokens`/`outputTokens`/`totalTokens`) on `response.completed`,
both `response.failed` and `response.incomplete` treated as terminal
RUN_ERROR (matching `processStreamChunks`), SSE-level `error` event also
surfaced as RUN_ERROR, and inline abort detection for
`RequestAbortedError` / `AbortError` → `code: 'aborted'`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: openai Chat Completions adapter + summarize streaming fix + example wiring

Packages
- ai-openai: add openaiChatCompletions / OpenAIChatCompletionsTextAdapter
  sibling to the existing Responses adapter. Thin subclass of
  OpenAIBaseChatCompletionsTextAdapter so callers can pick the older
  /v1/chat/completions wire format against the OpenAI SDK.
- ai: ChatStreamSummarizeAdapter.summarizeStream now accumulates summary
  text and emits a terminal CUSTOM { name: 'generation:result' } event
  before passing RUN_FINISHED through. Fixes useSummarize never populating
  result in connection/server-fn streaming modes — GenerationClient only
  sets result on that specific CUSTOM event.

ts-react-chat example
- Structured Output menu: drop the misleading '(OpenRouter)' suffix from
  the sidebar entry; relabel the OpenAI option as 'OpenAI (Responses)';
  add 'OpenAI (Chat Completions)' and 'OpenRouter (Responses beta)' so
  the page exposes all four wire-format combinations end-to-end.
- Summarize page: add a model picker (gpt-4o-mini through gpt-5.2) wired
  through to the API route and both server-fns. Drop the hard-coded
  maxLength: 200 which on Responses-API reasoning models gets the whole
  max_output_tokens budget consumed by hidden reasoning; the style
  instruction in the prompt already drives length. Live-render
  TEXT_MESSAGE_CONTENT deltas via onChunk so streaming mode is visibly
  streaming rather than appearing identical to direct.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci: apply automated fixes

* refactor(ai-openrouter): drop casts and `satisfies StreamChunk` from structured-output streams

Address PR #527 review feedback from @AlemTuzlak (3 comments on
responses-text.ts, 1 on text.ts, 2 on activities/chat/index.ts):

- ai-openrouter/responses-text.ts (`structuredOutputStream`):
  - Drop `(await this.orClient.beta.responses.send(...)) as AsyncIterable<StreamEvents>` —
    `EventStream<T>` already extends `AsyncIterable<T>`.
  - Drop `as ResponsesRequest['text']` on the inner `text` object — the SDK's
    request type accepts the literal shape directly.
  - Drop inline `(chunk as { ... }).delta` / `(chunk.response ?? {}) as {...}` casts.
    `NormalizedStreamEvent` already types `delta` and `response`; the existing
    `processStreamChunks` reads the same fields without casts.
  - Drop redundant `satisfies StreamChunk` (20×). The `AsyncIterable<StreamChunk>` /
    `Generator<StreamChunk>` return types already validate every yield site via
    contextual typing.

- ai-openrouter/text.ts (`structuredOutputStream`):
  - Drop `(await this.orClient.chat.send(...)) as AsyncIterable<ChatStreamChunk>`.
  - Drop redundant `satisfies StreamChunk` (17×).

- ai/activities/chat/index.ts:
  - Replace `{ chatOptions: TextOptions<any, any>; outputSchema: any }` parameter
    on `fallbackStructuredOutputStream` with `StructuredOutputOptions<Record<string,
    unknown>>` — the adapter-side type already exists.
  - Drop `(adapter as { provider?: string }).provider ?? adapter.name` in the
    structured-stream logger. `provider` is not a `TextAdapter` field; `adapter.name`
    is the canonical provider identifier.
  - Drop redundant `satisfies StreamChunk` / `satisfies StructuredOutputCompleteEvent`
    (8×) in `fallbackStructuredOutputStream` and `runStreamingStructuredOutputImpl`.

- ai/tests/chat-result-types.test.ts (new):
  - Add type-only regression test for `TextActivityResult`. Pins each
    `(outputSchema?, stream?)` combination so #526's streaming-structured-output
    branch can't silently regress to a Promise (or vice versa).

* Removed satisfies StreamChunk

* refactor(ai): drop `as unknown as` casts in chat() dispatch

Use narrowed locals (`outputSchema`, `stream`) and explicit `outputSchema: undefined` overrides instead of double-casting `options` through `unknown`. The trailing `as TextActivityResult<TSchema, TStream>` stays — TS narrows value types from runtime guards but not generic type parameters, so the conditional return type can't be reduced from inside a branch.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci: apply automated fixes

* docs(ai): document streaming structured output in skill + chat docs

Cover chat({ outputSchema, stream: true }) in docs/chat/structured-outputs.md
and the ai-core/structured-outputs skill: StructuredOutputStream<T> return
type, isStructuredOutputCompleteEvent example, structured-output.complete
event shape, per-adapter coverage (native vs. fallback), and a HIGH common
mistake against parsing partial JSON deltas. Adds a cross-ref from the skill
to ai-core/chat-experience.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci: apply automated fixes

* feat(ai): tag custom events in StructuredOutputStream + debug-log chunks in structuredOutputStream

Public `StructuredOutputStream<T>` is now a discriminated union over three
tagged CUSTOM variants: `structured-output.complete<T>`, `approval-requested`,
and `tool-input-available`. Each has a literal `name` and typed `value`, so
`chunk.type === 'CUSTOM' && chunk.name === '<literal>'` narrows directly to
the exact shape — no `isStructuredOutputCompleteEvent` helper or cast needed.
The bare CustomEvent is excluded from the union (its `value: any` would
collapse the narrow to `any`); user-emitted events via the `emitCustomEvent`
API still flow at runtime as a documented residual gap.
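The direct discriminated narrow described above works like this. The variant shapes are simplified from the commit text and `Person` is a hypothetical schema type:

```typescript
// Sketch of the tagged-CUSTOM narrowing: the literal `name` discriminates
// within the CUSTOM variants, so no helper guard or cast is needed.
interface Person {
  name: string
}

type StructuredChunk =
  | { type: 'TEXT_MESSAGE_CONTENT'; delta: string }
  | { type: 'CUSTOM'; name: 'structured-output.complete'; value: { object: Person; raw: string } }
  | { type: 'CUSTOM'; name: 'approval-requested'; value: { toolCallId: string } }

function finalObject(chunk: StructuredChunk): Person | undefined {
  if (chunk.type === 'CUSTOM' && chunk.name === 'structured-output.complete') {
    return chunk.value.object
  }
  return undefined
}
```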

New exports from @tanstack/ai: `ApprovalRequestedEvent`,
`ToolInputAvailableEvent`. The `isStructuredOutputCompleteEvent` helper is
removed (this overload is new in this PR — no shipped consumers).

Per-chunk `logger.provider(...)` debug logging added inside
`structuredOutputStream` for the four affected adapters (openai-base
chat-completions + responses, ai-openrouter text + responses-text), matching
the existing pattern in `chatStream` for end-to-end introspection in debug
mode. ai-openrouter uses `finishReason` (camelCase) consistent with the SDK
and the sibling chatStream logger; openai-base uses `finish_reason` per the
openai SDK shape.

Docs (`docs/chat/structured-outputs.md`) and the AI-core
`structured-outputs` SKILL.md updated to use the direct discriminated
narrow.

* chore: consolidate streaming-structured-output changesets into one

Merge the 10 changesets covering this PR (streaming structured output
across chat/grok/groq/openai/openai-base/openrouter, the openrouter
decoupling + narrowing, and the summarize subsystem unification) into a
single `.changeset/streaming-structured-output.md` with the union of
version bumps. The body retains every meaningful section from the
originals (core, openai-base, provider adapters, openrouter decoupling,
summarize) and adds the tagged-CustomEvent type design from the previous
commit.

* chore: scaffold .agent/self-learning pile with build-before-examples lesson

Initial scaffold of `.agent/self-learning/` for the self-improve plugin
(INDEX.md, config.yml, curation-state.yml, coupling.json, .gitignore,
`lessons/promoted/`). Captures the first repo-scoped lesson:
`2026-05-14-build-before-running-examples.md` — run
`pnpm -w run build:all` before starting any example dev server so the
workspace packages have `dist/` outputs vite can resolve.

* ci: apply automated fixes

* docs: streaming structured output with tools + OpenAI Chat Completions adapter

docs/chat/structured-outputs.md
  Add "Streaming with tools that may pause" subsection covering the
  approval-requested / tool-input-available tagged variants the agent
  loop can emit before structured-output.complete. Code example shows
  the narrowing pattern for all three CUSTOM variants. Cross-links the
  Tool Approval Flow and Client Tools pages.

docs/adapters/openai.md
  Add "Chat Completions API" section after Basic Usage covering the new
  openaiChatCompletions / createOpenaiChatCompletions factories — when
  to pick Chat Completions vs. Responses (reasoning-summary streaming,
  wire-format compatibility), code example, and a link to the Structured
  Outputs page for the streaming case. API Reference at the bottom now
  includes both factories.

* docs(chat/structured-outputs): lead with client+server flow, demote manual iteration to advanced

The previous streaming section opened with `for await (const chunk of stream)`
— that's the advanced/server-side-only path. The typical use case is a UI
streaming JSON deltas through SSE from a server endpoint, and the docs should
lead with it.

- New "Server endpoint" subsection: `chat({outputSchema, stream: true})` +
  `toServerSentEventsResponse(stream)`. One short example, no ceremony.
- New "Client with useChat" subsection: `useChat` + `fetchServerSentEvents`
  + `onChunk`, with `parsePartialJSON` driving progressive UI. Shows where
  the validated object lives (the terminal `structured-output.complete` event,
  typed as `T` via the schema). Notes Vue/Solid/Svelte share the shape.
- "What the stream contains" + "Adapter coverage" tables retained verbatim.
- Old standalone `for await` example moved to a new "Advanced: iterating the
  stream directly" subsection at the end, framed as the path for Node scripts,
  CLIs, server-only flows, and tests.
- "Streaming with tools that may pause" reframed to use the `onChunk` signature
  (matching the new primary path); a note points back to the advanced section
  for callers iterating the stream directly.

* feat(ai-react): useChat managed partial/final for structured-output streaming

Pass the same schema you give chat() on the server to useChat() on the
client, and the hook tracks the progressive object and the validated
terminal payload for you — no external useState, no onChunk ceremony, no
parsePartialJSON calls in user code.

API:

  const { sendMessage, isLoading, partial, final } = useChat({
    connection: fetchServerSentEvents("/api/extract"),
    outputSchema: PersonSchema,
  })

  // partial: DeepPartial<Person>           — updates per TEXT_MESSAGE_CONTENT delta
  // final:   Person | null                  — snaps on structured-output.complete

Implementation:

- New generic param TSchema extends SchemaInput | undefined = undefined on
  UseChatOptions / UseChatReturn / useChat.
- UseChatReturn is conditional on TSchema: when supplied, adds typed
  partial/final; when undefined (default), return is unchanged. Inferred
  automatically from outputSchema option.
- Internal onChunk handler tracks raw JSON buffer via ref, runs
  parsePartialJSON on each TEXT_MESSAGE_CONTENT delta, snaps final on the
  terminal CUSTOM structured-output.complete event, resets all three on
  RUN_STARTED. User's own onChunk callback still fires after internal
  processing — both compose.
- DeepPartial<T> exported for handlers that need to annotate.

The schema is used purely for client-side type inference; server-side
validation still runs against the schema passed to chat({ outputSchema })
on the server route. Works identically for non-streaming endpoints — for
those, partial stays {} and final populates when the single terminal
event arrives.

Type-level tests (tests/use-chat-types.test.ts) pin both branches of the
discriminated return type — useChat() without outputSchema rejects access
to partial/final via @ts-expect-error, useChat() with outputSchema asserts
typed DeepPartial<Person> / Person | null.

* ci: apply automated fixes

* feat(ai-vue, ai-solid, ai-svelte): mirror useChat outputSchema/partial/final

Apply the same schema-driven structured-output API that landed in
@tanstack/ai-react to the other three framework hooks. Same options shape
(`outputSchema?: TSchema`), same discriminated return type, identical
runtime behavior — only the reactivity primitive differs per framework.

Reactivity primitives:

  Vue    — `Readonly<ShallowRef<DeepPartial<T>>>` / `Readonly<ShallowRef<T | null>>`
  Solid  — `Accessor<DeepPartial<T>>` / `Accessor<T | null>`
  Svelte — `readonly partial: DeepPartial<T>` / `readonly final: T | null`
           (rune-backed getters)

Each hook is now generic on `TSchema extends SchemaInput | undefined`,
inferred from the `outputSchema` option. When omitted (default), the
return type is byte-identical to before; when supplied, `partial`/`final`
are added via a conditional `UseChatReturn<TTools, TSchema>` /
`CreateChatReturn<TTools, TSchema>`. The internal onChunk handler is the
same in all four — RUN_STARTED resets, TEXT_MESSAGE_CONTENT accumulates +
parses, CUSTOM structured-output.complete snaps final. User onChunk is
still invoked after the internal pass.

DeepPartial<T> is exported from each framework package.

Type-level tests in each package pin both branches of the discriminated
return type, mirroring the React variant — pure types, no renderer
required. Existing test suites pass on all three packages:

  ai-vue:    93 tests pass
  ai-solid:  103 tests pass
  ai-svelte: 56 tests pass
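The log doesn't spell out the exported `DeepPartial<T>`; the conventional recursive mapped-type shape is sketched below. This is an assumption — the real helper may special-case arrays or functions:

```typescript
// Hedged sketch of DeepPartial: every property optional at every depth.
// Assumption — the actual exported type may handle arrays/functions
// differently.
type DeepPartial<T> = T extends object
  ? { [K in keyof T]?: DeepPartial<T[K]> }
  : T

interface Person {
  name: string
  address: { city: string; zip: string }
}

// A mid-stream snapshot can be arbitrarily sparse yet still type-check,
// which is exactly what the per-delta `partial` value needs.
const snapshot: DeepPartial<Person> = { address: { city: "Oslo" } }
```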

* docs: structured-outputs cross-framework + rendering reasoning/tool-calls

- structured-outputs.md "Client with useChat" section: add a "Rendering
  reasoning and tool calls" subsection explaining that those land on
  messages[…].parts (ThinkingPart, ToolCallPart, ToolResultPart) just
  like normal chat — no separate hook fields. Includes a render snippet
  showing how to hide the raw-JSON TextPart and let the structured view
  (partial/final) replace it.
- Note that useChat (React/Vue/Solid) and createChat (Svelte) all accept
  the same outputSchema option with the same semantics — only the
  reactivity primitive differs.
- Changeset: bump @tanstack/ai-vue, @tanstack/ai-solid, @tanstack/ai-svelte
  to minor alongside @tanstack/ai-react. Replaced the "React" section with
  a unified "Framework hooks" section covering all four packages and
  documenting the per-framework reactivity types.
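The "hide the raw-JSON TextPart" idea above can be sketched framework-neutrally. The part shapes here are hypothetical stand-ins for the real `TextPart`/`ThinkingPart`/`ToolCallPart` types in `@tanstack/ai`:

```typescript
// Hypothetical part shapes — only illustrates the filtering idea: drop
// the raw-JSON TextPart so the structured partial/final view replaces
// it, while reasoning and tool-call parts still render as normal chat.
type Part =
  | { type: "text"; text: string }
  | { type: "thinking"; text: string }
  | { type: "tool-call"; tool: string; state: string }

function renderableParts(parts: Array<Part>): Array<Part> {
  // Text parts carry the raw JSON buffer during a structured run;
  // everything else renders unchanged.
  return parts.filter((p) => p.type !== "text")
}
```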

* ci: apply automated fixes

* docs(structured-outputs): fix 'with tools that may pause' to use real APIs

The previous draft of the streaming-with-tools-that-may-pause subsection
invented showApprovalPrompt / runClientTool / resumeWithToolResult
helpers. The actual flow uses the standard chat APIs, identical to a
non-structured chat:

- Server tools with needsApproval:true land on messages[...].parts as
  ToolCallPart with state === 'approval-requested'. Render approval UI
  from messages, respond via addToolApprovalResponse({ id, approved })
  from the hook return (see docs/tools/tool-approval).
- Client tools with execute() set run automatically via the
  ChatClient's onToolCall handler (chat-client.ts:198-233). For manual
  handling, use addToolResult({ toolCallId, tool, output, state }) —
  see docs/tools/client-tools.

Replaced the invented code with a real example showing an approval-
gated tool inside a structured-output run, using addToolApprovalResponse
and rendering the prompt from messages.parts. The structured stream
layers on top of standard chat — no special pause-handling logic.
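Selecting the approval-gated tool calls out of a message's parts, as the corrected docs describe, can be sketched with hypothetical part shapes (the real types and the `addToolApprovalResponse` signature live in the framework hooks):

```typescript
// Hypothetical shapes — sketches locating approval-requested tool calls
// on a message's parts. The UI would render these as prompts and answer
// via addToolApprovalResponse({ id, approved }) from the hook return.
type ToolCallPart = {
  type: "tool-call"
  id: string
  tool: string
  state: "approval-requested" | "input-available" | "complete"
}
type MessagePart = ToolCallPart | { type: "text"; text: string }

function pendingApprovals(parts: Array<MessagePart>): Array<ToolCallPart> {
  return parts.filter(
    (p): p is ToolCallPart =>
      p.type === "tool-call" && p.state === "approval-requested",
  )
}
```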

* test: cover useChat({outputSchema}) runtime + runStreamingStructuredOutput orchestrator

Two runtime test files closing the highest-value gaps in the PR's
test coverage:

packages/typescript/ai-react/tests/use-chat-structured-output.test.ts (4 tests)
  - partial updates progressively from TEXT_MESSAGE_CONTENT deltas, final
    snaps on the terminal CUSTOM structured-output.complete event
  - state resets between runs via the stateful mock adapter (RUN_STARTED
    clears partial/final before the second run's deltas land)
  - user-supplied onChunk callback fires after internal tracking, with
    full visibility of the same chunks
  - useChat() without outputSchema doesn't track structured state — the
    internal handler's outputSchema-gate is a no-op

packages/typescript/ai/tests/chat-structured-output-stream.test.ts (6 tests)
  - native adapter.structuredOutputStream path: validated structured-
    output.complete event forwarded with parsed object, schema validation
    failure → RUN_ERROR { code: 'schema-validation' } and NO complete
    event is emitted, reasoning carries through validation onto the
    terminal event, TEXT_MESSAGE_CONTENT deltas pass through
  - fallbackStructuredOutputStream path (adapter lacks native streaming):
    synthesizes RUN_STARTED → TEXT_MESSAGE_* → structured-output.complete
    → RUN_FINISHED around the non-streaming structuredOutput call;
    schema validation failure on the fallback path also emits RUN_ERROR

Together: ai package 769 tests, ai-react 110, ai-vue 93, ai-solid 103,
ai-svelte 56 — all green.

* ci: apply automated fixes

* test(openai-base): cover structuredOutputStream on both base adapters

The server-side adapter implementations of structuredOutputStream (shared
by ai-openai, ai-grok, ai-groq via inheritance) had zero unit coverage —
only the e2e suite exercised them. Two new focused test files close that
gap by stubbing the openai SDK client and verifying the AG-UI lifecycle,
request shape, error paths, and per-chunk debug logging.

tests/chat-completions-structured-output-stream.test.ts (6 tests)
  - happy path: RUN_STARTED → TEXT_MESSAGE_* → CUSTOM
    structured-output.complete (typed object + raw JSON) → RUN_FINISHED
  - request shape: stream: true + response_format: { type: 'json_schema',
    json_schema: { strict: true } }; tools are stripped
  - delta accumulation across multiple chunks produces exactly one
    structured-output.complete with the fully-parsed object
  - empty content → RUN_ERROR { code: 'empty-response' }, no
    structured-output.complete is emitted
  - malformed JSON → RUN_ERROR { code: 'parse-error' }
  - per-chunk logger.provider is called once per SDK chunk (verified via
    a spy logger threaded through resolveDebugOption)

tests/responses-structured-output-stream.test.ts (7 tests)
  - same matrix against the Responses API event shape
    (response.created / response.output_text.delta / response.completed)
  - request shape: stream: true + text.format: { type: 'json_schema',
    strict: true }; tools stripped
  - usage promoted from response.completed onto RUN_FINISHED
  - empty content / parse-error → RUN_ERROR with the correct code
  - response.refusal.delta → RUN_ERROR { code: 'refusal' } (Responses-
    only failure surface)
  - per-chunk logger.provider invocation

Stub adapters extend the base directly and pass a fake OpenAI client
whose chat.completions.create / responses.create routes into a per-test
mock — same pattern as the existing chat-completions-text.test.ts and
responses-text.test.ts suites.

openai-base test count: 70 → 83 (all passing). Types + lint clean.

* ci: apply automated fixes

* fix(ci): list @standard-schema/spec as devDep on framework packages

Two failures from the previous push, both stemming from the type-test
files I added: knip flagged @standard-schema/spec as an unlisted import
across ai-react, ai-vue, ai-solid, and ai-svelte test files, and
@tanstack/ai-vue:test:types failed because — unlike the other three —
its tsconfig included tests/, so tsc strictly resolved the import (which
isn't a direct dep, only transitively via @tanstack/ai).

Fixes:

- Add `@standard-schema/spec: ^1.1.0` to devDependencies on all four
  framework packages. The import is purely for type-level construction
  in the type tests (StandardJSONSchemaV1<Person, Person> — a phantom
  branded type that simulates what a Zod schema's inferred type would
  look like). devDep is the right scope.
- Align ai-vue's tsconfig with ai-react/ai-solid/ai-svelte by dropping
  tests/ from the tsc include block. Tests are still type-checked by
  vitest at runtime; tsc now only checks src/.

Verified locally: pnpm test:knip, pnpm test:sherif, and test:types on
all four framework packages pass.

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Alem Tuzlak <t.zlak@hotmail.com>


Development

Successfully merging this pull request may close these issues.

Migrate ai-groq, ai-openrouter, ai-ollama to openai-base + parameterize the base for SDK shape variance
