
ci: Version Packages #554

Open
github-actions[bot] wants to merge 1 commit into main from changeset-release/main

Conversation

@github-actions (Contributor)

This PR was opened by the Changesets release GitHub action. When you're ready to do a release, you can merge this and the packages will be published to npm automatically. If you're not ready to do a release yet, that's fine, whenever you add more changesets to main, this PR will be updated.

Releases

@tanstack/openai-base@0.3.0

Minor Changes

  • Decouple @tanstack/ai-openrouter from the shared OpenAI base, and collapse the base into a thinner shim over the openai SDK. (#545)

    Three changes that ship together:

    1. Rename @tanstack/ai-openai-compatible to @tanstack/openai-base. The previous name implied a multi-vendor protocol surface. After ai-openrouter is decoupled (see below), the only remaining consumers (ai-openai, ai-grok, ai-groq) are all backed by the openai SDK with a different baseURL — "base" describes that role accurately. Imports change:

    - import { OpenAICompatibleChatCompletionsTextAdapter } from '@tanstack/ai-openai-compatible'
    + import { OpenAIBaseChatCompletionsTextAdapter } from '@tanstack/openai-base'
    - import { OpenAICompatibleResponsesTextAdapter } from '@tanstack/ai-openai-compatible'
    + import { OpenAIBaseResponsesTextAdapter } from '@tanstack/openai-base'

    @tanstack/ai-openai-compatible@0.2.x remains published for anyone with a pinned lockfile reference but will receive no further updates.

    2. @tanstack/openai-base adopts the openai SDK directly. The previous package vendored ~720 LOC of hand-written wire-format types (ChatCompletion, ResponseStreamEvent, etc.) and exposed abstract callChatCompletion* / callResponse* hooks subclasses had to implement. Both are gone:

    • The base now depends on openai again and imports types directly from openai/resources/.... The vendored src/types/ directory is removed; consumers that imported wire types from the package (e.g. import type { ResponseInput } from '@tanstack/ai-openai-compatible') should now import from the openai SDK.
    • The abstract SDK-call methods are removed. The base constructor takes a pre-built OpenAI client (new OpenAIBaseChatCompletionsTextAdapter(model, name, openaiClient)) and calls client.chat.completions.create / client.responses.create itself. Subclasses (ai-openai, ai-grok, ai-groq) now just construct the SDK client with their provider-specific baseURL and pass it to super; the callChatCompletion* / callResponse* overrides go away.

    The other extension hooks (extractReasoning, extractTextFromResponse, processStreamChunks, makeStructuredOutputCompatible, transformStructuredOutput, mapOptionsToRequest, convertMessage) remain. Groq's processStreamChunks and makeStructuredOutputCompatible overrides (for x_groq.usage promotion and Groq's structured-output schema quirks) are unchanged.
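    The new construction pattern can be modelled in miniature. The class and hook names below mirror this changelog, but the shapes, the stub client, and the subclass are illustrative stand-ins rather than the package's actual code:

    ```typescript
    // Minimal model of the new base/subclass split. The real classes live in
    // @tanstack/openai-base; this stand-in only shows the shape of the change.
    interface ChatClient {
      createChatCompletion(model: string, prompt: string): string
    }

    class OpenAIBaseAdapter {
      // The base now receives a pre-built client and performs the SDK call
      // itself; the old abstract callChatCompletion* hook is gone.
      constructor(
        readonly model: string,
        readonly name: string,
        private client: ChatClient,
      ) {}

      chat(prompt: string): string {
        return this.client.createChatCompletion(this.model, prompt)
      }
    }

    // A provider subclass only constructs the client (in the real packages,
    // the openai SDK with a provider-specific baseURL) and hands it to super.
    class GrokAdapter extends OpenAIBaseAdapter {
      constructor(model: string, client: ChatClient) {
        super(model, 'grok', client)
      }
    }

    const stub: ChatClient = {
      createChatCompletion: (model, prompt) => `[${model}] ${prompt}`,
    }
    const grok = new GrokAdapter('grok-3', stub)
    console.log(grok.chat('hi')) // → "[grok-3] hi"
    ```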

    3. Decouple @tanstack/ai-openrouter from the OpenAI base entirely. OpenRouter ships its own SDK (@openrouter/sdk) with a camelCase shape, so inheriting from the OpenAI-shaped base forced a snake_case ↔ camelCase round-trip on every request and stream event. ai-openrouter now extends BaseTextAdapter directly and inlines its own stream processors (OpenRouterTextAdapter for chat-completions, OpenRouterResponsesTextAdapter for the Responses beta), reading OpenRouter's camelCase types natively. The @tanstack/openai-base and openai dependencies are removed from ai-openrouter; only @openrouter/sdk, @tanstack/ai, and @tanstack/ai-utils remain.

    Public API is unchanged: openRouterText, openRouterResponsesText, createOpenRouterText, createOpenRouterResponsesText, the OpenRouter tool factories, provider routing surface (provider, models, plugins, variant, transforms), app attribution headers (httpReferer, appTitle), :variant model suffixing, RequestAbortedError propagation, and the OpenRouter-specific structured-output null-preservation all behave the same. The ~300 LOC of inbound/outbound shape converters (toOpenRouterRequest, toChatCompletion, adaptOpenRouterStreamChunks, toSnakeResponseResult, …) are gone.
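    The flavour of per-field conversion the old inheritance forced can be sketched generically (the removed converters named above were specific to OpenRouter's request and stream types; this is only the general idea):

    ```typescript
    // Generic camelCase -> snake_case key rewrite of the kind the old
    // OpenAI-shaped base forced on every OpenRouter request and stream event.
    // ai-openrouter now reads the SDK's camelCase types natively, so this
    // conversion layer is gone.
    function toSnakeCase(obj: Record<string, unknown>): Record<string, unknown> {
      return Object.fromEntries(
        Object.entries(obj).map(([key, value]) => [
          key.replace(/[A-Z]/g, (c) => `_${c.toLowerCase()}`),
          value,
        ]),
      )
    }

    console.log(toSnakeCase({ finishReason: 'stop', promptTokens: 12 }))
    // → { finish_reason: 'stop', prompt_tokens: 12 }
    ```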

    ai-ollama remains on BaseTextAdapter directly — its native API uses a different wire format from Chat Completions and was never on the shared base.

Patch Changes

  • Updated dependencies [0f17a38]:
    • @tanstack/ai@0.16.1

Renamed from @tanstack/ai-openai-compatible in 0.3.0. See the README for context.

@tanstack/ai@0.16.1

Patch Changes

  • Unify the summarize subsystem on a shared chat-stream wrapper, plumb modelOptions through end-to-end, and tighten the TProviderOptions generic. (#545)

    Provider summarize adapters now share one implementation. Anthropic, Gemini, Ollama, and OpenRouter previously each shipped a bespoke 200–300 LOC summarize adapter that re-implemented streaming, error handling, usage accounting, and chunk assembly on top of their text adapter. They now construct a ChatStreamSummarizeAdapter (formerly ChatStreamWrapperAdapter, renamed and exported from @tanstack/ai/activities) wrapping their own text adapter, matching the existing OpenAI/Grok pattern. Removes ~600 LOC of duplicated logic across the six providers and ensures behavioural parity.

    SummarizationOptions.modelOptions now reaches the wire. Previously the activity layer (runSummarize / runStreamingSummarize) silently dropped modelOptions when building the internal SummarizationOptions it forwarded to the adapter. Provider-specific knobs (Anthropic cache control, OpenRouter plugins, Gemini safety settings, Groq tuning params, …) now flow through correctly.

    Provider summarize types resolve from the wrapped text adapter. Each provider previously shipped a bespoke XSummarizeProviderOptions interface (a partial copy of its text provider options). Those interfaces are removed; summarize provider options are now inferred from the text adapter's ~types via the new InferTextProviderOptions<TAdapter> helper exported from @tanstack/ai/activities. IntelliSense for modelOptions on summarize({ adapter: openai('gpt-4o'), … }) now matches what chat({ adapter: openai('gpt-4o'), … }) would show.

    SummarizeAdapter interface methods are now generic in TProviderOptions. summarize and summarizeStream previously took SummarizationOptions (defaulted, so modelOptions was effectively Record<string, any> regardless of the adapter's typed shape). They now take SummarizationOptions<TProviderOptions>, threading the class's TProviderOptions generic through. Source-compatible for callers that didn't specify the generic; type-tighter for implementers and downstream consumers.

    Defaults are aligned across the summarize surface. SummarizationOptions, SummarizeAdapter, BaseSummarizeAdapter, and ChatStreamSummarizeAdapter previously had a mixed Record<string, any> / Record<string, unknown> / object set of defaults for TProviderOptions. They now uniformly default to Record<string, unknown>, so unparameterised consumers must narrow modelOptions values before indexed access. The extends object constraint is unchanged — per-model typed interfaces (e.g. OpenAIBaseOptions & OpenAIReasoningOptions & ...) inferred via InferTextProviderOptions<TAdapter> continue to satisfy it without needing a string index signature. No public-surface signature change for callers that supply a concrete provider-options shape (every shipping adapter does).
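    A minimal sketch of why the Record<string, unknown> default matters; the interface below is a stripped-down stand-in, not the library's actual SummarizationOptions:

    ```typescript
    // Stand-in for the library's options type: TProviderOptions now defaults
    // to Record<string, unknown> rather than Record<string, any>.
    interface SummarizationOptions<TProviderOptions = Record<string, unknown>> {
      prompt: string
      modelOptions?: TProviderOptions
    }

    // An unparameterised consumer gets `unknown` back from indexed access,
    // so it must narrow before using the value -- with the old `any` default
    // this would silently type-check without the guard.
    function readTemperature(opts: SummarizationOptions): number | undefined {
      const t = opts.modelOptions?.temperature // typed unknown, not any
      return typeof t === 'number' ? t : undefined
    }

    console.log(readTemperature({ prompt: 'hi', modelOptions: { temperature: 0.7 } })) // → 0.7
    ```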

    Bespoke *SummarizeProviderOptions interfaces (e.g. OpenAISummarizeProviderOptions, AnthropicSummarizeProviderOptions, GeminiSummarizeProviderOptions, OllamaSummarizeProviderOptions, OpenRouterSummarizeProviderOptions) are removed from the provider packages' public exports. Consumers who imported them should switch to inferring the type from the adapter (InferTextProviderOptions<typeof adapter>) or remove the explicit annotation (it'll be inferred from the adapter argument).
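    The migration can be sketched with a stand-in for the helper; the ~types phantom-field shape here is assumed from this changelog, not copied from the library source:

    ```typescript
    // Minimal stand-in for the InferTextProviderOptions helper exported from
    // @tanstack/ai/activities: pull the provider-options type off an
    // adapter's ~types phantom field.
    type InferTextProviderOptions<TAdapter> =
      TAdapter extends { '~types': { providerOptions: infer TOptions } }
        ? TOptions
        : never

    // Hypothetical provider-options shape for illustration.
    interface MyProviderOptions {
      temperature?: number
      reasoningEffort?: 'low' | 'high'
    }

    // A text adapter carrying its provider-options type on ~types.
    const adapter = {
      model: 'gpt-4o',
      '~types': {} as { providerOptions: MyProviderOptions },
    }

    // Before: import a bespoke *SummarizeProviderOptions interface.
    // After: infer the same shape from the adapter value.
    type SummarizeOpts = InferTextProviderOptions<typeof adapter>

    const opts: SummarizeOpts = { temperature: 0.2 } // checked against MyProviderOptions
    console.log(opts.temperature) // → 0.2
    ```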

  • Updated dependencies []:

    • @tanstack/ai-event-client@0.3.1

@tanstack/ai-anthropic@0.8.7

Patch Changes

  • Unify the summarize subsystem on a shared chat-stream wrapper, plumb modelOptions through end-to-end, and tighten the TProviderOptions generic. (#545)

    Provider summarize adapters now share one implementation. Anthropic, Gemini, Ollama, and OpenRouter previously each shipped a bespoke 200–300 LOC summarize adapter that re-implemented streaming, error handling, usage accounting, and chunk assembly on top of their text adapter. They now construct a ChatStreamSummarizeAdapter (formerly ChatStreamWrapperAdapter, renamed and exported from @tanstack/ai/activities) wrapping their own text adapter, matching the existing OpenAI/Grok pattern. Removes ~600 LOC of duplicated logic across the six providers and ensures behavioural parity.

    SummarizationOptions.modelOptions now reaches the wire. Previously the activity layer (runSummarize / runStreamingSummarize) silently dropped modelOptions when building the internal SummarizationOptions it forwarded to the adapter. Provider-specific knobs (Anthropic cache control, OpenRouter plugins, Gemini safety settings, Groq tuning params, …) now flow through correctly.

    Provider summarize types resolve from the wrapped text adapter. Each provider previously shipped a bespoke XSummarizeProviderOptions interface (a partial copy of its text provider options). Those interfaces are removed; summarize provider options are now inferred from the text adapter's ~types via the new InferTextProviderOptions<TAdapter> helper exported from @tanstack/ai/activities. IntelliSense for modelOptions on summarize({ adapter: openai('gpt-4o'), … }) now matches what chat({ adapter: openai('gpt-4o'), … }) would show.

    SummarizeAdapter interface methods are now generic in TProviderOptions. summarize and summarizeStream previously took SummarizationOptions (defaulted, so modelOptions was effectively Record<string, any> regardless of the adapter's typed shape). They now take SummarizationOptions<TProviderOptions>, threading the class's TProviderOptions generic through. Source-compatible for callers that didn't specify the generic; type-tighter for implementers and downstream consumers.

    Defaults are aligned across the summarize surface. SummarizationOptions, SummarizeAdapter, BaseSummarizeAdapter, and ChatStreamSummarizeAdapter previously had a mixed Record<string, any> / Record<string, unknown> / object set of defaults for TProviderOptions. They now uniformly default to Record<string, unknown>, so unparameterised consumers must narrow modelOptions values before indexed access. The extends object constraint is unchanged — per-model typed interfaces (e.g. OpenAIBaseOptions & OpenAIReasoningOptions & ...) inferred via InferTextProviderOptions<TAdapter> continue to satisfy it without needing a string index signature. No public-surface signature change for callers that supply a concrete provider-options shape (every shipping adapter does).

    Bespoke *SummarizeProviderOptions interfaces (e.g. OpenAISummarizeProviderOptions, AnthropicSummarizeProviderOptions, GeminiSummarizeProviderOptions, OllamaSummarizeProviderOptions, OpenRouterSummarizeProviderOptions) are removed from the provider packages' public exports. Consumers who imported them should switch to inferring the type from the adapter (InferTextProviderOptions<typeof adapter>) or remove the explicit annotation (it'll be inferred from the adapter argument).

  • Updated dependencies [0f17a38]:

    • @tanstack/ai@0.16.1

@tanstack/ai-client@0.9.2

Patch Changes

  • Updated dependencies [0f17a38]:
    • @tanstack/ai@0.16.1
    • @tanstack/ai-event-client@0.3.1

@tanstack/ai-code-mode@0.1.11

Patch Changes

  • Updated dependencies [0f17a38]:
    • @tanstack/ai@0.16.1

@tanstack/ai-code-mode-skills@0.1.11

Patch Changes

  • Updated dependencies [0f17a38]:
    • @tanstack/ai@0.16.1
    • @tanstack/ai-code-mode@0.1.11

@tanstack/ai-devtools-core@0.3.28

Patch Changes

  • Updated dependencies [0f17a38]:
    • @tanstack/ai@0.16.1
    • @tanstack/ai-event-client@0.3.1

@tanstack/ai-event-client@0.3.1

Patch Changes

  • Updated dependencies [0f17a38]:
    • @tanstack/ai@0.16.1

@tanstack/ai-fal@0.7.4

Patch Changes

  • Updated dependencies [0f17a38]:
    • @tanstack/ai@0.16.1

@tanstack/ai-gemini@0.10.4

Patch Changes

  • Unify the summarize subsystem on a shared chat-stream wrapper, plumb modelOptions through end-to-end, and tighten the TProviderOptions generic. (#545)

    Provider summarize adapters now share one implementation. Anthropic, Gemini, Ollama, and OpenRouter previously each shipped a bespoke 200–300 LOC summarize adapter that re-implemented streaming, error handling, usage accounting, and chunk assembly on top of their text adapter. They now construct a ChatStreamSummarizeAdapter (formerly ChatStreamWrapperAdapter, renamed and exported from @tanstack/ai/activities) wrapping their own text adapter, matching the existing OpenAI/Grok pattern. Removes ~600 LOC of duplicated logic across the six providers and ensures behavioural parity.

    SummarizationOptions.modelOptions now reaches the wire. Previously the activity layer (runSummarize / runStreamingSummarize) silently dropped modelOptions when building the internal SummarizationOptions it forwarded to the adapter. Provider-specific knobs (Anthropic cache control, OpenRouter plugins, Gemini safety settings, Groq tuning params, …) now flow through correctly.

    Provider summarize types resolve from the wrapped text adapter. Each provider previously shipped a bespoke XSummarizeProviderOptions interface (a partial copy of its text provider options). Those interfaces are removed; summarize provider options are now inferred from the text adapter's ~types via the new InferTextProviderOptions<TAdapter> helper exported from @tanstack/ai/activities. IntelliSense for modelOptions on summarize({ adapter: openai('gpt-4o'), … }) now matches what chat({ adapter: openai('gpt-4o'), … }) would show.

    SummarizeAdapter interface methods are now generic in TProviderOptions. summarize and summarizeStream previously took SummarizationOptions (defaulted, so modelOptions was effectively Record<string, any> regardless of the adapter's typed shape). They now take SummarizationOptions<TProviderOptions>, threading the class's TProviderOptions generic through. Source-compatible for callers that didn't specify the generic; type-tighter for implementers and downstream consumers.

    Defaults are aligned across the summarize surface. SummarizationOptions, SummarizeAdapter, BaseSummarizeAdapter, and ChatStreamSummarizeAdapter previously had a mixed Record<string, any> / Record<string, unknown> / object set of defaults for TProviderOptions. They now uniformly default to Record<string, unknown>, so unparameterised consumers must narrow modelOptions values before indexed access. The extends object constraint is unchanged — per-model typed interfaces (e.g. OpenAIBaseOptions & OpenAIReasoningOptions & ...) inferred via InferTextProviderOptions<TAdapter> continue to satisfy it without needing a string index signature. No public-surface signature change for callers that supply a concrete provider-options shape (every shipping adapter does).

    Bespoke *SummarizeProviderOptions interfaces (e.g. OpenAISummarizeProviderOptions, AnthropicSummarizeProviderOptions, GeminiSummarizeProviderOptions, OllamaSummarizeProviderOptions, OpenRouterSummarizeProviderOptions) are removed from the provider packages' public exports. Consumers who imported them should switch to inferring the type from the adapter (InferTextProviderOptions<typeof adapter>) or remove the explicit annotation (it'll be inferred from the adapter argument).

  • Updated dependencies [0f17a38]:

    • @tanstack/ai@0.16.1

@tanstack/ai-grok@0.7.4

Patch Changes

  • Decouple @tanstack/ai-openrouter from the shared OpenAI base, and collapse the base into a thinner shim over the openai SDK. (#545)

    Three changes that ship together:

    1. Rename @tanstack/ai-openai-compatible to @tanstack/openai-base. The previous name implied a multi-vendor protocol surface. After ai-openrouter is decoupled (see below), the only remaining consumers (ai-openai, ai-grok, ai-groq) are all backed by the openai SDK with a different baseURL — "base" describes that role accurately. Imports change:

    - import { OpenAICompatibleChatCompletionsTextAdapter } from '@tanstack/ai-openai-compatible'
    + import { OpenAIBaseChatCompletionsTextAdapter } from '@tanstack/openai-base'
    - import { OpenAICompatibleResponsesTextAdapter } from '@tanstack/ai-openai-compatible'
    + import { OpenAIBaseResponsesTextAdapter } from '@tanstack/openai-base'

    @tanstack/ai-openai-compatible@0.2.x remains published for anyone with a pinned lockfile reference but will receive no further updates.

    2. @tanstack/openai-base adopts the openai SDK directly. The previous package vendored ~720 LOC of hand-written wire-format types (ChatCompletion, ResponseStreamEvent, etc.) and exposed abstract callChatCompletion* / callResponse* hooks subclasses had to implement. Both are gone:

    • The base now depends on openai again and imports types directly from openai/resources/.... The vendored src/types/ directory is removed; consumers that imported wire types from the package (e.g. import type { ResponseInput } from '@tanstack/ai-openai-compatible') should now import from the openai SDK.
    • The abstract SDK-call methods are removed. The base constructor takes a pre-built OpenAI client (new OpenAIBaseChatCompletionsTextAdapter(model, name, openaiClient)) and calls client.chat.completions.create / client.responses.create itself. Subclasses (ai-openai, ai-grok, ai-groq) now just construct the SDK client with their provider-specific baseURL and pass it to super; the callChatCompletion* / callResponse* overrides go away.

    The other extension hooks (extractReasoning, extractTextFromResponse, processStreamChunks, makeStructuredOutputCompatible, transformStructuredOutput, mapOptionsToRequest, convertMessage) remain. Groq's processStreamChunks and makeStructuredOutputCompatible overrides (for x_groq.usage promotion and Groq's structured-output schema quirks) are unchanged.

    3. Decouple @tanstack/ai-openrouter from the OpenAI base entirely. OpenRouter ships its own SDK (@openrouter/sdk) with a camelCase shape, so inheriting from the OpenAI-shaped base forced a snake_case ↔ camelCase round-trip on every request and stream event. ai-openrouter now extends BaseTextAdapter directly and inlines its own stream processors (OpenRouterTextAdapter for chat-completions, OpenRouterResponsesTextAdapter for the Responses beta), reading OpenRouter's camelCase types natively. The @tanstack/openai-base and openai dependencies are removed from ai-openrouter; only @openrouter/sdk, @tanstack/ai, and @tanstack/ai-utils remain.

    Public API is unchanged: openRouterText, openRouterResponsesText, createOpenRouterText, createOpenRouterResponsesText, the OpenRouter tool factories, provider routing surface (provider, models, plugins, variant, transforms), app attribution headers (httpReferer, appTitle), :variant model suffixing, RequestAbortedError propagation, and the OpenRouter-specific structured-output null-preservation all behave the same. The ~300 LOC of inbound/outbound shape converters (toOpenRouterRequest, toChatCompletion, adaptOpenRouterStreamChunks, toSnakeResponseResult, …) are gone.

    ai-ollama remains on BaseTextAdapter directly — its native API uses a different wire format from Chat Completions and was never on the shared base.

  • Unify the summarize subsystem on a shared chat-stream wrapper, plumb modelOptions through end-to-end, and tighten the TProviderOptions generic. (#545)

    Provider summarize adapters now share one implementation. Anthropic, Gemini, Ollama, and OpenRouter previously each shipped a bespoke 200–300 LOC summarize adapter that re-implemented streaming, error handling, usage accounting, and chunk assembly on top of their text adapter. They now construct a ChatStreamSummarizeAdapter (formerly ChatStreamWrapperAdapter, renamed and exported from @tanstack/ai/activities) wrapping their own text adapter, matching the existing OpenAI/Grok pattern. Removes ~600 LOC of duplicated logic across the six providers and ensures behavioural parity.

    SummarizationOptions.modelOptions now reaches the wire. Previously the activity layer (runSummarize / runStreamingSummarize) silently dropped modelOptions when building the internal SummarizationOptions it forwarded to the adapter. Provider-specific knobs (Anthropic cache control, OpenRouter plugins, Gemini safety settings, Groq tuning params, …) now flow through correctly.

    Provider summarize types resolve from the wrapped text adapter. Each provider previously shipped a bespoke XSummarizeProviderOptions interface (a partial copy of its text provider options). Those interfaces are removed; summarize provider options are now inferred from the text adapter's ~types via the new InferTextProviderOptions<TAdapter> helper exported from @tanstack/ai/activities. IntelliSense for modelOptions on summarize({ adapter: openai('gpt-4o'), … }) now matches what chat({ adapter: openai('gpt-4o'), … }) would show.

    SummarizeAdapter interface methods are now generic in TProviderOptions. summarize and summarizeStream previously took SummarizationOptions (defaulted, so modelOptions was effectively Record<string, any> regardless of the adapter's typed shape). They now take SummarizationOptions<TProviderOptions>, threading the class's TProviderOptions generic through. Source-compatible for callers that didn't specify the generic; type-tighter for implementers and downstream consumers.

    Defaults are aligned across the summarize surface. SummarizationOptions, SummarizeAdapter, BaseSummarizeAdapter, and ChatStreamSummarizeAdapter previously had a mixed Record<string, any> / Record<string, unknown> / object set of defaults for TProviderOptions. They now uniformly default to Record<string, unknown>, so unparameterised consumers must narrow modelOptions values before indexed access. The extends object constraint is unchanged — per-model typed interfaces (e.g. OpenAIBaseOptions & OpenAIReasoningOptions & ...) inferred via InferTextProviderOptions<TAdapter> continue to satisfy it without needing a string index signature. No public-surface signature change for callers that supply a concrete provider-options shape (every shipping adapter does).

    Bespoke *SummarizeProviderOptions interfaces (e.g. OpenAISummarizeProviderOptions, AnthropicSummarizeProviderOptions, GeminiSummarizeProviderOptions, OllamaSummarizeProviderOptions, OpenRouterSummarizeProviderOptions) are removed from the provider packages' public exports. Consumers who imported them should switch to inferring the type from the adapter (InferTextProviderOptions<typeof adapter>) or remove the explicit annotation (it'll be inferred from the adapter argument).

  • Updated dependencies [0f17a38, 0f17a38]:

    • @tanstack/openai-base@0.3.0
    • @tanstack/ai@0.16.1

@tanstack/ai-groq@0.1.12

Patch Changes

  • Decouple @tanstack/ai-openrouter from the shared OpenAI base, and collapse the base into a thinner shim over the openai SDK. (#545)

    Three changes that ship together:

    1. Rename @tanstack/ai-openai-compatible to @tanstack/openai-base. The previous name implied a multi-vendor protocol surface. After ai-openrouter is decoupled (see below), the only remaining consumers (ai-openai, ai-grok, ai-groq) are all backed by the openai SDK with a different baseURL — "base" describes that role accurately. Imports change:

    - import { OpenAICompatibleChatCompletionsTextAdapter } from '@tanstack/ai-openai-compatible'
    + import { OpenAIBaseChatCompletionsTextAdapter } from '@tanstack/openai-base'
    - import { OpenAICompatibleResponsesTextAdapter } from '@tanstack/ai-openai-compatible'
    + import { OpenAIBaseResponsesTextAdapter } from '@tanstack/openai-base'

    @tanstack/ai-openai-compatible@0.2.x remains published for anyone with a pinned lockfile reference but will receive no further updates.

    2. @tanstack/openai-base adopts the openai SDK directly. The previous package vendored ~720 LOC of hand-written wire-format types (ChatCompletion, ResponseStreamEvent, etc.) and exposed abstract callChatCompletion* / callResponse* hooks subclasses had to implement. Both are gone:

    • The base now depends on openai again and imports types directly from openai/resources/.... The vendored src/types/ directory is removed; consumers that imported wire types from the package (e.g. import type { ResponseInput } from '@tanstack/ai-openai-compatible') should now import from the openai SDK.
    • The abstract SDK-call methods are removed. The base constructor takes a pre-built OpenAI client (new OpenAIBaseChatCompletionsTextAdapter(model, name, openaiClient)) and calls client.chat.completions.create / client.responses.create itself. Subclasses (ai-openai, ai-grok, ai-groq) now just construct the SDK client with their provider-specific baseURL and pass it to super; the callChatCompletion* / callResponse* overrides go away.

    The other extension hooks (extractReasoning, extractTextFromResponse, processStreamChunks, makeStructuredOutputCompatible, transformStructuredOutput, mapOptionsToRequest, convertMessage) remain. Groq's processStreamChunks and makeStructuredOutputCompatible overrides (for x_groq.usage promotion and Groq's structured-output schema quirks) are unchanged.

    3. Decouple @tanstack/ai-openrouter from the OpenAI base entirely. OpenRouter ships its own SDK (@openrouter/sdk) with a camelCase shape, so inheriting from the OpenAI-shaped base forced a snake_case ↔ camelCase round-trip on every request and stream event. ai-openrouter now extends BaseTextAdapter directly and inlines its own stream processors (OpenRouterTextAdapter for chat-completions, OpenRouterResponsesTextAdapter for the Responses beta), reading OpenRouter's camelCase types natively. The @tanstack/openai-base and openai dependencies are removed from ai-openrouter; only @openrouter/sdk, @tanstack/ai, and @tanstack/ai-utils remain.

    Public API is unchanged: openRouterText, openRouterResponsesText, createOpenRouterText, createOpenRouterResponsesText, the OpenRouter tool factories, provider routing surface (provider, models, plugins, variant, transforms), app attribution headers (httpReferer, appTitle), :variant model suffixing, RequestAbortedError propagation, and the OpenRouter-specific structured-output null-preservation all behave the same. The ~300 LOC of inbound/outbound shape converters (toOpenRouterRequest, toChatCompletion, adaptOpenRouterStreamChunks, toSnakeResponseResult, …) are gone.

    ai-ollama remains on BaseTextAdapter directly — its native API uses a different wire format from Chat Completions and was never on the shared base.

  • Updated dependencies [0f17a38, 0f17a38]:

    • @tanstack/openai-base@0.3.0
    • @tanstack/ai@0.16.1

@tanstack/ai-isolate-cloudflare@0.2.2

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-code-mode@0.1.11

@tanstack/ai-isolate-node@0.1.11

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-code-mode@0.1.11

@tanstack/ai-isolate-quickjs@0.1.11

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-code-mode@0.1.11

@tanstack/ai-ollama@0.6.14

Patch Changes

  • Unify the summarize subsystem on a shared chat-stream wrapper, plumb modelOptions through end-to-end, and tighten the TProviderOptions generic. (#545)

    Provider summarize adapters now share one implementation. Anthropic, Gemini, Ollama, and OpenRouter previously each shipped a bespoke 200–300 LOC summarize adapter that re-implemented streaming, error handling, usage accounting, and chunk assembly on top of their text adapter. They now construct a ChatStreamSummarizeAdapter (formerly ChatStreamWrapperAdapter, renamed and exported from @tanstack/ai/activities) wrapping their own text adapter, matching the existing OpenAI/Grok pattern. Removes ~600 LOC of duplicated logic across the six providers and ensures behavioural parity.

    SummarizationOptions.modelOptions now reaches the wire. Previously the activity layer (runSummarize / runStreamingSummarize) silently dropped modelOptions when building the internal SummarizationOptions it forwarded to the adapter. Provider-specific knobs (Anthropic cache control, OpenRouter plugins, Gemini safety settings, Groq tuning params, …) now flow through correctly.

    Provider summarize types resolve from the wrapped text adapter. Each provider previously shipped a bespoke XSummarizeProviderOptions interface (a partial copy of its text provider options). Those interfaces are removed; summarize provider options are now inferred from the text adapter's ~types via the new InferTextProviderOptions<TAdapter> helper exported from @tanstack/ai/activities. IntelliSense for modelOptions on summarize({ adapter: openai('gpt-4o'), … }) now matches what chat({ adapter: openai('gpt-4o'), … }) would show.

    SummarizeAdapter interface methods are now generic in TProviderOptions. summarize and summarizeStream previously took SummarizationOptions (defaulted, so modelOptions was effectively Record<string, any> regardless of the adapter's typed shape). They now take SummarizationOptions<TProviderOptions>, threading the class's TProviderOptions generic through. Source-compatible for callers that didn't specify the generic; type-tighter for implementers and downstream consumers.

    Defaults are aligned across the summarize surface. SummarizationOptions, SummarizeAdapter, BaseSummarizeAdapter, and ChatStreamSummarizeAdapter previously had a mixed Record<string, any> / Record<string, unknown> / object set of defaults for TProviderOptions. They now uniformly default to Record<string, unknown>, so unparameterised consumers must narrow modelOptions values before indexed access. The extends object constraint is unchanged — per-model typed interfaces (e.g. OpenAIBaseOptions & OpenAIReasoningOptions & ...) inferred via InferTextProviderOptions<TAdapter> continue to satisfy it without needing a string index signature. No public-surface signature change for callers that supply a concrete provider-options shape (every shipping adapter does).

    Bespoke *SummarizeProviderOptions interfaces (e.g. OpenAISummarizeProviderOptions, AnthropicSummarizeProviderOptions, GeminiSummarizeProviderOptions, OllamaSummarizeProviderOptions, OpenRouterSummarizeProviderOptions) are removed from the provider packages' public exports. Consumers who imported them should switch to inferring the type from the adapter (InferTextProviderOptions<typeof adapter>) or remove the explicit annotation (it'll be inferred from the adapter argument).

  • Updated dependencies [0f17a38]:

    • @tanstack/ai@0.16.1

@tanstack/ai-openai@0.8.6

Patch Changes

  • Decouple @tanstack/ai-openrouter from the shared OpenAI base, and collapse the base into a thinner shim over the openai SDK. (#545)

    Three changes that ship together:

    1. Rename @tanstack/ai-openai-compatible to @tanstack/openai-base. The previous name implied a multi-vendor protocol surface. After ai-openrouter is decoupled (see below), the only remaining consumers (ai-openai, ai-grok, ai-groq) are all backed by the openai SDK with a different baseURL — "base" describes that role accurately. Imports change:

    - import { OpenAICompatibleChatCompletionsTextAdapter } from '@tanstack/ai-openai-compatible'
    + import { OpenAIBaseChatCompletionsTextAdapter } from '@tanstack/openai-base'
    - import { OpenAICompatibleResponsesTextAdapter } from '@tanstack/ai-openai-compatible'
    + import { OpenAIBaseResponsesTextAdapter } from '@tanstack/openai-base'

    @tanstack/ai-openai-compatible@0.2.x remains published for anyone with a pinned lockfile reference but will receive no further updates.

    2. @tanstack/openai-base adopts the openai SDK directly. The previous package vendored ~720 LOC of hand-written wire-format types (ChatCompletion, ResponseStreamEvent, etc.) and exposed abstract callChatCompletion* / callResponse* hooks subclasses had to implement. Both are gone:

    • The base now depends on openai again and imports types directly from openai/resources/.... The vendored src/types/ directory is removed; consumers that imported wire types from the package (e.g. import type { ResponseInput } from '@tanstack/ai-openai-compatible') should now import from the openai SDK.
    • The abstract SDK-call methods are removed. The base constructor takes a pre-built OpenAI client (new OpenAIBaseChatCompletionsTextAdapter(model, name, openaiClient)) and calls client.chat.completions.create / client.responses.create itself. Subclasses (ai-openai, ai-grok, ai-groq) now just construct the SDK with their provider-specific baseURL and pass it to super; the callChatCompletion* / callResponse* overrides go away.
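    The new ownership split can be modelled with a stub client. Only the constructor arity comes from the changelog; the class bodies, the stub shape, and the baseURL usage here are illustrative:

```typescript
// Minimal model of the new wiring: the base receives a pre-built client and
// performs the SDK call itself; subclasses only pick their provider's baseURL.
// StubClient is shaped like the corner of the openai SDK that gets used.
interface StubClient {
  chat: { completions: { create(req: { model: string }): Promise<string> } }
}

class BaseAdapter {
  constructor(
    readonly model: string,
    readonly name: string,
    protected client: StubClient,
  ) {}
  // No abstract callChatCompletion* hook — the base calls the client directly.
  complete(): Promise<string> {
    return this.client.chat.completions.create({ model: this.model })
  }
}

// Stand-in for `new OpenAI({ baseURL })`; echoes its inputs for the sketch.
function stubClient(baseURL: string): StubClient {
  return { chat: { completions: { create: async (req) => `${baseURL} -> ${req.model}` } } }
}

// A subclass now just constructs its provider-specific client and passes it up.
class GroqLikeAdapter extends BaseAdapter {
  constructor(model: string) {
    super(model, 'groq', stubClient('https://api.groq.com/openai/v1'))
  }
}
```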

    The other extension hooks (extractReasoning, extractTextFromResponse, processStreamChunks, makeStructuredOutputCompatible, transformStructuredOutput, mapOptionsToRequest, convertMessage) remain. Groq's processStreamChunks and makeStructuredOutputCompatible overrides (for x_groq.usage promotion and Groq's structured-output schema quirks) are unchanged.

    3. Decouple @tanstack/ai-openrouter from the OpenAI base entirely. OpenRouter ships its own SDK (@openrouter/sdk) with a camelCase shape, so inheriting from the OpenAI-shaped base forced a snake_case ↔ camelCase round-trip on every request and stream event. ai-openrouter now extends BaseTextAdapter directly and inlines its own stream processors (OpenRouterTextAdapter for chat-completions, OpenRouterResponsesTextAdapter for the Responses beta), reading OpenRouter's camelCase types natively. The @tanstack/openai-base and openai dependencies are removed from ai-openrouter; only @openrouter/sdk, @tanstack/ai, and @tanstack/ai-utils remain.

    Public API is unchanged: openRouterText, openRouterResponsesText, createOpenRouterText, createOpenRouterResponsesText, the OpenRouter tool factories, provider routing surface (provider, models, plugins, variant, transforms), app attribution headers (httpReferer, appTitle), :variant model suffixing, RequestAbortedError propagation, and the OpenRouter-specific structured-output null-preservation all behave the same. The ~300 LOC of inbound/outbound shape converters (toOpenRouterRequest, toChatCompletion, adaptOpenRouterStreamChunks, toSnakeResponseResult, …) are gone.

    ai-ollama remains on BaseTextAdapter directly — its native API uses a different wire format from Chat Completions and was never on the shared base.

  • Unify the summarize subsystem on a shared chat-stream wrapper, plumb modelOptions through end-to-end, and tighten the TProviderOptions generic. (#545)

    Provider summarize adapters now share one implementation. Anthropic, Gemini, Ollama, and OpenRouter previously each shipped a bespoke 200–300 LOC summarize adapter that re-implemented streaming, error handling, usage accounting, and chunk assembly on top of their text adapter. They now construct a ChatStreamSummarizeAdapter (formerly ChatStreamWrapperAdapter, renamed and exported from @tanstack/ai/activities) wrapping their own text adapter, matching the existing OpenAI/Grok pattern. Removes ~600 LOC of duplicated logic across the six providers and ensures behavioural parity.

    SummarizationOptions.modelOptions now reaches the wire. Previously the activity layer (runSummarize / runStreamingSummarize) silently dropped modelOptions when building the internal SummarizationOptions it forwarded to the adapter. Provider-specific knobs (Anthropic cache control, OpenRouter plugins, Gemini safety settings, Groq tuning params, …) now flow through correctly.

    Provider summarize types resolve from the wrapped text adapter. Each provider previously shipped a bespoke XSummarizeProviderOptions interface (a partial copy of its text provider options). Those interfaces are removed; summarize provider options are now inferred from the text adapter's ~types via the new InferTextProviderOptions<TAdapter> helper exported from @tanstack/ai/activities. IntelliSense for modelOptions on summarize({ adapter: openai('gpt-4o'), … }) now matches what chat({ adapter: openai('gpt-4o'), … }) would show.

    SummarizeAdapter interface methods are now generic in TProviderOptions. summarize and summarizeStream previously took SummarizationOptions (defaulted, so modelOptions was effectively Record<string, any> regardless of the adapter's typed shape). They now take SummarizationOptions<TProviderOptions>, threading the class's TProviderOptions generic through. Source-compatible for callers that didn't specify the generic; type-tighter for implementers and downstream consumers.
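    The generic threading can be sketched in isolation (interface bodies simplified to the relevant fields; names match the changelog, everything else is illustrative):

```typescript
// Simplified: the options type is generic, defaulting to Record<string, unknown>.
interface SummarizationOptions<
  TProviderOptions extends object = Record<string, unknown>,
> {
  text: string
  modelOptions?: TProviderOptions
}

// The adapter threads its own TProviderOptions into the method signature.
interface SummarizeAdapter<
  TProviderOptions extends object = Record<string, unknown>,
> {
  summarize(options: SummarizationOptions<TProviderOptions>): Promise<string>
}

// An implementer with a concrete shape now gets fully typed modelOptions.
interface FakeOptions { temperature?: number }
const typedAdapter: SummarizeAdapter<FakeOptions> = {
  async summarize(options) {
    // options.modelOptions is FakeOptions | undefined, not Record<string, any>
    return `summary(t=${options.modelOptions?.temperature ?? 'default'})`
  },
}
```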

    Defaults aligned across the summarize surface. SummarizationOptions, SummarizeAdapter, BaseSummarizeAdapter, and ChatStreamSummarizeAdapter previously had a mixed Record<string, any> / Record<string, unknown> / object set of defaults for TProviderOptions. They now uniformly default to Record<string, unknown>, so unparameterised consumers must narrow before indexed access on modelOptions. The extends object constraint is unchanged — per-model typed interfaces (e.g. OpenAIBaseOptions & OpenAIReasoningOptions & ...) inferred via InferTextProviderOptions<TAdapter> continue to satisfy it without needing a string index signature. There is no public-surface signature change for callers that supply a concrete provider-options shape (every shipping adapter does).

    Bespoke *SummarizeProviderOptions interfaces (e.g. OpenAISummarizeProviderOptions, AnthropicSummarizeProviderOptions, GeminiSummarizeProviderOptions, OllamaSummarizeProviderOptions, OpenRouterSummarizeProviderOptions) are removed from the provider packages' public exports. Consumers who imported them should switch to inferring the type from the adapter (InferTextProviderOptions<typeof adapter>) or remove the explicit annotation (it'll be inferred from the adapter argument).

  • Updated dependencies [0f17a38, 0f17a38]:

    • @tanstack/openai-base@0.3.0
    • @tanstack/ai@0.16.1
    • @tanstack/ai-client@0.9.2

@tanstack/ai-openrouter@0.8.6

Patch Changes

  • Decouple @tanstack/ai-openrouter from the shared OpenAI base, and collapse the base into a thinner shim over the openai SDK. (#545)

    Three changes that ship together:

    1. Rename @tanstack/ai-openai-compatible → @tanstack/openai-base. The previous name implied a multi-vendor protocol surface. After ai-openrouter is decoupled (see below), the only remaining consumers (ai-openai, ai-grok, ai-groq) all back onto the openai SDK with a different baseURL — "base" describes that role accurately. Imports change:

    - import { OpenAICompatibleChatCompletionsTextAdapter } from '@tanstack/ai-openai-compatible'
    + import { OpenAIBaseChatCompletionsTextAdapter } from '@tanstack/openai-base'
    - import { OpenAICompatibleResponsesTextAdapter } from '@tanstack/ai-openai-compatible'
    + import { OpenAIBaseResponsesTextAdapter } from '@tanstack/openai-base'

    @tanstack/ai-openai-compatible@0.2.x remains published for anyone with a pinned lockfile reference but will receive no further updates.

    2. @tanstack/openai-base adopts the openai SDK directly. The previous package vendored ~720 LOC of hand-written wire-format types (ChatCompletion, ResponseStreamEvent, etc.) and exposed abstract callChatCompletion* / callResponse* hooks subclasses had to implement. Both are gone:

    • The base now depends on openai again and imports types directly from openai/resources/.... The vendored src/types/ directory is removed; consumers that imported wire types from the package (e.g. import type { ResponseInput } from '@tanstack/ai-openai-compatible') should now import from the openai SDK.
    • The abstract SDK-call methods are removed. The base constructor takes a pre-built OpenAI client (new OpenAIBaseChatCompletionsTextAdapter(model, name, openaiClient)) and calls client.chat.completions.create / client.responses.create itself. Subclasses (ai-openai, ai-grok, ai-groq) now just construct the SDK with their provider-specific baseURL and pass it to super; the callChatCompletion* / callResponse* overrides go away.

    The other extension hooks (extractReasoning, extractTextFromResponse, processStreamChunks, makeStructuredOutputCompatible, transformStructuredOutput, mapOptionsToRequest, convertMessage) remain. Groq's processStreamChunks and makeStructuredOutputCompatible overrides (for x_groq.usage promotion and Groq's structured-output schema quirks) are unchanged.

    3. Decouple @tanstack/ai-openrouter from the OpenAI base entirely. OpenRouter ships its own SDK (@openrouter/sdk) with a camelCase shape, so inheriting from the OpenAI-shaped base forced a snake_case ↔ camelCase round-trip on every request and stream event. ai-openrouter now extends BaseTextAdapter directly and inlines its own stream processors (OpenRouterTextAdapter for chat-completions, OpenRouterResponsesTextAdapter for the Responses beta), reading OpenRouter's camelCase types natively. The @tanstack/openai-base and openai dependencies are removed from ai-openrouter; only @openrouter/sdk, @tanstack/ai, and @tanstack/ai-utils remain.

    Public API is unchanged: openRouterText, openRouterResponsesText, createOpenRouterText, createOpenRouterResponsesText, the OpenRouter tool factories, provider routing surface (provider, models, plugins, variant, transforms), app attribution headers (httpReferer, appTitle), :variant model suffixing, RequestAbortedError propagation, and the OpenRouter-specific structured-output null-preservation all behave the same. The ~300 LOC of inbound/outbound shape converters (toOpenRouterRequest, toChatCompletion, adaptOpenRouterStreamChunks, toSnakeResponseResult, …) are gone.

    ai-ollama remains on BaseTextAdapter directly — its native API uses a different wire format from Chat Completions and was never on the shared base.

  • Internal: drop the remaining duck-typed as { ... } casts on stream chunks in OpenRouterResponsesTextAdapter. Five sites (response.created/in_progress/incomplete/failed model + error capture, the response.content_part.added/done payload, and the response.completed function-call detection) now narrow via the SDK's discriminated unions directly. Behaviourally identical; reduces the chance of an SDK type rename silently slipping past us. (#545)
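    Narrowing via a discriminated union instead of an as cast looks like this (the event union below is invented for illustration; the real types come from @openrouter/sdk):

```typescript
// Invented stream-event union standing in for the SDK's discriminated union.
type ResponseStreamEvent =
  | { type: 'response.created'; response: { model: string } }
  | { type: 'response.failed'; response: { error: { message: string } } }
  | { type: 'response.completed'; response: { output: Array<{ type: string }> } }

// Switching on the discriminant narrows each arm — no `as { ... }` cast — and
// a renamed variant becomes a compile error instead of a silently dead branch.
function describeEvent(event: ResponseStreamEvent): string {
  switch (event.type) {
    case 'response.created':
      return `model=${event.response.model}`
    case 'response.failed':
      return `error=${event.response.error.message}`
    case 'response.completed':
      return `outputs=${event.response.output.length}`
  }
}
```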

  • Unify the summarize subsystem on a shared chat-stream wrapper, plumb modelOptions through end-to-end, and tighten the TProviderOptions generic. (#545)

    Provider summarize adapters now share one implementation. Anthropic, Gemini, Ollama, and OpenRouter previously each shipped a bespoke 200–300 LOC summarize adapter that re-implemented streaming, error handling, usage accounting, and chunk assembly on top of their text adapter. They now construct a ChatStreamSummarizeAdapter (formerly ChatStreamWrapperAdapter, renamed and exported from @tanstack/ai/activities) wrapping their own text adapter, matching the existing OpenAI/Grok pattern. Removes ~600 LOC of duplicated logic across the six providers and ensures behavioural parity.

    SummarizationOptions.modelOptions now reaches the wire. Previously the activity layer (runSummarize / runStreamingSummarize) silently dropped modelOptions when building the internal SummarizationOptions it forwarded to the adapter. Provider-specific knobs (Anthropic cache control, OpenRouter plugins, Gemini safety settings, Groq tuning params, …) now flow through correctly.

    Provider summarize types resolve from the wrapped text adapter. Each provider previously shipped a bespoke XSummarizeProviderOptions interface (a partial copy of its text provider options). Those interfaces are removed; summarize provider options are now inferred from the text adapter's ~types via the new InferTextProviderOptions<TAdapter> helper exported from @tanstack/ai/activities. IntelliSense for modelOptions on summarize({ adapter: openai('gpt-4o'), … }) now matches what chat({ adapter: openai('gpt-4o'), … }) would show.

    SummarizeAdapter interface methods are now generic in TProviderOptions. summarize and summarizeStream previously took SummarizationOptions (defaulted, so modelOptions was effectively Record<string, any> regardless of the adapter's typed shape). They now take SummarizationOptions<TProviderOptions>, threading the class's TProviderOptions generic through. Source-compatible for callers that didn't specify the generic; type-tighter for implementers and downstream consumers.

    Defaults aligned across the summarize surface. SummarizationOptions, SummarizeAdapter, BaseSummarizeAdapter, and ChatStreamSummarizeAdapter previously had a mixed Record<string, any> / Record<string, unknown> / object set of defaults for TProviderOptions. They now uniformly default to Record<string, unknown>, so unparameterised consumers must narrow before indexed access on modelOptions. The extends object constraint is unchanged — per-model typed interfaces (e.g. OpenAIBaseOptions & OpenAIReasoningOptions & ...) inferred via InferTextProviderOptions<TAdapter> continue to satisfy it without needing a string index signature. There is no public-surface signature change for callers that supply a concrete provider-options shape (every shipping adapter does).

    Bespoke *SummarizeProviderOptions interfaces (e.g. OpenAISummarizeProviderOptions, AnthropicSummarizeProviderOptions, GeminiSummarizeProviderOptions, OllamaSummarizeProviderOptions, OpenRouterSummarizeProviderOptions) are removed from the provider packages' public exports. Consumers who imported them should switch to inferring the type from the adapter (InferTextProviderOptions<typeof adapter>) or remove the explicit annotation (it'll be inferred from the adapter argument).

  • Updated dependencies [0f17a38]:

    • @tanstack/ai@0.16.1

@tanstack/ai-preact@0.6.23

Patch Changes

  • Updated dependencies [0f17a38]:
    • @tanstack/ai@0.16.1
    • @tanstack/ai-client@0.9.2

@tanstack/ai-react@0.8.3

Patch Changes

  • Updated dependencies [0f17a38]:
    • @tanstack/ai@0.16.1
    • @tanstack/ai-client@0.9.2

@tanstack/ai-solid@0.7.3

Patch Changes

  • Updated dependencies [0f17a38]:
    • @tanstack/ai@0.16.1
    • @tanstack/ai-client@0.9.2

@tanstack/ai-svelte@0.7.3

Patch Changes

  • Updated dependencies [0f17a38]:
    • @tanstack/ai@0.16.1
    • @tanstack/ai-client@0.9.2

@tanstack/ai-vue@0.7.3

Patch Changes

  • Updated dependencies [0f17a38]:
    • @tanstack/ai@0.16.1
    • @tanstack/ai-client@0.9.2

@tanstack/ai-vue-ui@0.1.34

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-vue@0.7.3

@tanstack/preact-ai-devtools@0.1.32

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-devtools-core@0.3.28

@tanstack/react-ai-devtools@0.2.32

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-devtools-core@0.3.28

@tanstack/solid-ai-devtools@0.2.32

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-devtools-core@0.3.28

ts-svelte-chat@0.1.42

Patch Changes

  • Updated dependencies [0f17a38, 0f17a38]:
    • @tanstack/ai-openai@0.8.6
    • @tanstack/ai@0.16.1
    • @tanstack/ai-anthropic@0.8.7
    • @tanstack/ai-gemini@0.10.4
    • @tanstack/ai-ollama@0.6.14
    • @tanstack/ai-client@0.9.2
    • @tanstack/ai-svelte@0.7.3

ts-vue-chat@0.1.42

Patch Changes

  • Updated dependencies [0f17a38, 0f17a38]:
    • @tanstack/ai-openai@0.8.6
    • @tanstack/ai@0.16.1
    • @tanstack/ai-anthropic@0.8.7
    • @tanstack/ai-gemini@0.10.4
    • @tanstack/ai-ollama@0.6.14
    • @tanstack/ai-client@0.9.2
    • @tanstack/ai-vue@0.7.3
    • @tanstack/ai-vue-ui@0.1.34

vanilla-chat@0.0.38

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-client@0.9.2

@tanstack/ai-code-mode-models-eval@0.0.16

Patch Changes

  • Updated dependencies [0f17a38, 0f17a38]:
    • @tanstack/ai-openai@0.8.6
    • @tanstack/ai-grok@0.7.4
    • @tanstack/ai-groq@0.1.12
    • @tanstack/ai@0.16.1
    • @tanstack/ai-anthropic@0.8.7
    • @tanstack/ai-gemini@0.10.4
    • @tanstack/ai-ollama@0.6.14
    • @tanstack/ai-code-mode@0.1.11
    • @tanstack/ai-isolate-node@0.1.11
