ci: Version Packages #554
Open

github-actions[bot] wants to merge 1 commit into
This PR was opened by the Changesets release GitHub action. When you're ready to do a release, you can merge this and the packages will be published to npm automatically. If you're not ready to do a release yet, that's fine; whenever you add more changesets to main, this PR will be updated.
Releases
@tanstack/openai-base@0.3.0
Minor Changes
Decouple `@tanstack/ai-openrouter` from the shared OpenAI base, and collapse the base into a thinner shim over the `openai` SDK. (#545)

Three changes that ship together:

1. Rename `@tanstack/ai-openai-compatible` → `@tanstack/openai-base`. The previous name implied a multi-vendor protocol surface. After ai-openrouter is decoupled (see below), the only remaining consumers (`ai-openai`, `ai-grok`, `ai-groq`) all back onto the `openai` SDK with a different `baseURL` — "base" describes that role accurately. Imports change accordingly. `@tanstack/ai-openai-compatible@0.2.x` remains published for anyone with a pinned lockfile reference but will receive no further updates.

2. `@tanstack/openai-base` adopts the `openai` SDK directly. The previous package vendored ~720 LOC of hand-written wire-format types (`ChatCompletion`, `ResponseStreamEvent`, etc.) and exposed abstract `callChatCompletion*`/`callResponse*` hooks subclasses had to implement. Both are gone: the base now depends on `openai` again and imports types directly from `openai/resources/...`. The vendored `src/types/` directory is removed; consumers that imported wire types from the package (e.g. `import type { ResponseInput } from '@tanstack/ai-openai-compatible'`) should now import from the openai SDK. The base adapter now takes an `OpenAI` client (`new OpenAIBaseChatCompletionsTextAdapter(model, name, openaiClient)`) and calls `client.chat.completions.create`/`client.responses.create` itself. Subclasses (`ai-openai`, `ai-grok`, `ai-groq`) now just construct the SDK with their provider-specific `baseURL` and pass it to `super` — the `callChatCompletion*`/`callResponse*` overrides go away. The other extension hooks (`extractReasoning`, `extractTextFromResponse`, `processStreamChunks`, `makeStructuredOutputCompatible`, `transformStructuredOutput`, `mapOptionsToRequest`, `convertMessage`) remain. Groq's `processStreamChunks` and `makeStructuredOutputCompatible` overrides (for `x_groq.usage` promotion and Groq's structured-output schema quirks) are unchanged.

3. Decouple `@tanstack/ai-openrouter` from the OpenAI base entirely. OpenRouter ships its own SDK (`@openrouter/sdk`) with a camelCase shape, so inheriting from the OpenAI-shaped base forced a snake_case ↔ camelCase round-trip on every request and stream event. ai-openrouter now extends `BaseTextAdapter` directly and inlines its own stream processors (`OpenRouterTextAdapter` for chat-completions, `OpenRouterResponsesTextAdapter` for the Responses beta), reading OpenRouter's camelCase types natively. The `@tanstack/openai-base` and `openai` dependencies are removed from ai-openrouter; only `@openrouter/sdk`, `@tanstack/ai`, and `@tanstack/ai-utils` remain.

Public API is unchanged: `openRouterText`, `openRouterResponsesText`, `createOpenRouterText`, `createOpenRouterResponsesText`, the OpenRouter tool factories, the provider routing surface (`provider`, `models`, `plugins`, `variant`, `transforms`), app attribution headers (`httpReferer`, `appTitle`), `:variant` model suffixing, `RequestAbortedError` propagation, and the OpenRouter-specific structured-output null-preservation all behave the same. The ~300 LOC of inbound/outbound shape converters (`toOpenRouterRequest`, `toChatCompletion`, `adaptOpenRouterStreamChunks`, `toSnakeResponseResult`, …) are gone. `ai-ollama` remains on `BaseTextAdapter` directly — its native API uses a different wire format from Chat Completions and was never on the shared base.

Patch Changes
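Change 2 above turns the provider packages into thin constructors. Here is a minimal self-contained sketch of that pattern: the `(model, name, openaiClient)` constructor shape comes from the entry above, while the stand-in client interface, the `GrokTextAdapter` name, and the base URL are illustrative rather than the shipped code.

```typescript
// Stand-in for the `openai` SDK client: only what the sketch needs.
// (The real base calls client.chat.completions.create / client.responses.create.)
interface OpenAIClientLike {
  baseURL: string
}

// Stand-in for the base adapter in @tanstack/openai-base: it now receives the
// SDK client directly instead of exposing abstract callChatCompletion* hooks.
class OpenAIBaseChatCompletionsTextAdapter {
  constructor(
    readonly model: string,
    readonly name: string,
    readonly client: OpenAIClientLike,
  ) {}
}

// A provider subclass (ai-openai, ai-grok, ai-groq) just builds the client
// with its provider-specific baseURL and passes it to super; no per-provider
// request or stream plumbing remains.
class GrokTextAdapter extends OpenAIBaseChatCompletionsTextAdapter {
  constructor(model: string) {
    super(model, 'grok', { baseURL: 'https://api.x.ai/v1' })
  }
}

const adapter = new GrokTextAdapter('grok-2')
console.log(adapter.name, adapter.client.baseURL)
```

The point of the shape: everything request-related lives in the base, so a provider package shrinks to "construct SDK, pick a name, delegate".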
Updated dependencies [0f17a38]:

@tanstack/ai@0.16.1
Patch Changes
Unify the summarize subsystem on a shared chat-stream wrapper, plumb `modelOptions` through end-to-end, and tighten the `TProviderOptions` generic. (#545)

Provider summarize adapters now share one implementation. Anthropic, Gemini, Ollama, and OpenRouter previously each shipped a bespoke 200–300 LOC summarize adapter that re-implemented streaming, error handling, usage accounting, and chunk assembly on top of their text adapter. They now construct a `ChatStreamSummarizeAdapter` (formerly `ChatStreamWrapperAdapter`, renamed and exported from `@tanstack/ai/activities`) wrapping their own text adapter, matching the existing OpenAI/Grok pattern. Removes ~600 LOC of duplicated logic across the six providers and ensures behavioural parity.

`SummarizationOptions.modelOptions` now reaches the wire. Previously the activity layer (`runSummarize`/`runStreamingSummarize`) silently dropped `modelOptions` when building the internal `SummarizationOptions` it forwarded to the adapter. Provider-specific knobs (Anthropic cache control, OpenRouter plugins, Gemini safety settings, Groq tuning params, …) now flow through correctly.

Provider summarize types resolve from the wrapped text adapter. Each provider previously shipped a bespoke `XSummarizeProviderOptions` interface (a partial copy of its text provider options). Those interfaces are removed; summarize provider options are now inferred from the text adapter's `~types` via the new `InferTextProviderOptions<TAdapter>` helper exported from `@tanstack/ai/activities`. IntelliSense for `modelOptions` on `summarize({ adapter: openai('gpt-4o'), … })` now matches what `chat({ adapter: openai('gpt-4o'), … })` would show.

`SummarizeAdapter` interface methods are now generic in `TProviderOptions`. `summarize` and `summarizeStream` previously took `SummarizationOptions` (defaulted, so `modelOptions` was effectively `Record<string, any>` regardless of the adapter's typed shape). They now take `SummarizationOptions<TProviderOptions>`, threading the class's `TProviderOptions` generic through. Source-compatible for callers that didn't specify the generic; type-tighter for implementers and downstream consumers.

Defaults aligned across the summarize surface. `SummarizationOptions`, `SummarizeAdapter`, `BaseSummarizeAdapter`, and `ChatStreamSummarizeAdapter` previously had a mixed `Record<string, any>`/`Record<string, unknown>`/`object` set of defaults for `TProviderOptions`. They now uniformly default to `Record<string, unknown>`, so unparameterised consumers narrow before indexed access on `modelOptions`. The `extends object` constraint is unchanged — per-model typed interfaces (e.g. `OpenAIBaseOptions & OpenAIReasoningOptions & ...`) inferred via `InferTextProviderOptions<TAdapter>` continue to satisfy it without needing a string index signature. No public-surface signature change for callers that supply a concrete provider-options shape (every shipping adapter does).

Bespoke `*SummarizeProviderOptions` interfaces (e.g. `OpenAISummarizeProviderOptions`, `AnthropicSummarizeProviderOptions`, `GeminiSummarizeProviderOptions`, `OllamaSummarizeProviderOptions`, `OpenRouterSummarizeProviderOptions`) are removed from the provider packages' public exports. Consumers who imported them should switch to inferring the type from the adapter (`InferTextProviderOptions<typeof adapter>`) or remove the explicit annotation (it'll be inferred from the adapter argument).

Updated dependencies []:
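The inference change in the entry above can be sketched end-to-end. Everything below is a local stand-in: the real `InferTextProviderOptions` is exported from `@tanstack/ai/activities` and reads the adapter's `~types` (which may differ in detail), the real `summarize` is asynchronous, and `GrokOptions` is invented here; the sketch only shows how `modelOptions` gets its type from the adapter instead of `Record<string, any>`.

```typescript
// Stand-in adapter: provider options ride along on a phantom `~types` key,
// mirroring how the shipped text adapters expose their option types.
interface TextAdapterLike<TProviderOptions extends object = Record<string, unknown>> {
  '~types'?: { providerOptions: TProviderOptions }
  model: string
}

// Sketch of the helper: recover TProviderOptions from an adapter type.
type InferTextProviderOptions<TAdapter> =
  TAdapter extends TextAdapterLike<infer TOptions> ? TOptions : Record<string, unknown>

// Illustrative provider with a typed options surface.
interface GrokOptions {
  reasoningEffort?: 'low' | 'high'
}
function grok(model: string): TextAdapterLike<GrokOptions> {
  return { model }
}

// summarize's modelOptions is now tied to the adapter argument.
// (Synchronous here only to keep the sketch small.)
function summarize<TAdapter extends TextAdapterLike<object>>(opts: {
  adapter: TAdapter
  prompt: string
  modelOptions?: InferTextProviderOptions<TAdapter>
}): string {
  return `summary of "${opts.prompt}" via ${opts.adapter.model}`
}

// Type-checks: { reasoningEffort: 'low' } matches GrokOptions.
// A typo such as { reasoning: 'low' } would now be a compile error.
const result = summarize({
  adapter: grok('grok-2'),
  prompt: 'release notes',
  modelOptions: { reasoningEffort: 'low' },
})
console.log(result)
```

This is why the bespoke `*SummarizeProviderOptions` interfaces could be deleted: the summarize surface borrows its option type from whichever text adapter it wraps.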
@tanstack/ai-anthropic@0.8.7
Patch Changes
Unify the summarize subsystem on a shared chat-stream wrapper, plumb `modelOptions` through end-to-end, and tighten the `TProviderOptions` generic. (#545); the full changeset entry appears under `@tanstack/ai@0.16.1` above.

Updated dependencies [0f17a38]:

@tanstack/ai-client@0.9.2
Patch Changes
Updated dependencies [0f17a38]:

@tanstack/ai-code-mode@0.1.11
Patch Changes
Updated dependencies [0f17a38]:

@tanstack/ai-code-mode-skills@0.1.11
Patch Changes
Updated dependencies [0f17a38]:

@tanstack/ai-devtools-core@0.3.28
Patch Changes
Updated dependencies [0f17a38]:

@tanstack/ai-event-client@0.3.1
Patch Changes
Updated dependencies [0f17a38]:

@tanstack/ai-fal@0.7.4
Patch Changes
Updated dependencies [0f17a38]:

@tanstack/ai-gemini@0.10.4
Patch Changes
Unify the summarize subsystem on a shared chat-stream wrapper, plumb `modelOptions` through end-to-end, and tighten the `TProviderOptions` generic. (#545); the full changeset entry appears under `@tanstack/ai@0.16.1` above.

Updated dependencies [0f17a38]:

@tanstack/ai-grok@0.7.4
Patch Changes
Decouple `@tanstack/ai-openrouter` from the shared OpenAI base, and collapse the base into a thinner shim over the `openai` SDK. (#545); the full changeset entry appears under `@tanstack/openai-base@0.3.0` above.

Unify the summarize subsystem on a shared chat-stream wrapper, plumb `modelOptions` through end-to-end, and tighten the `TProviderOptions` generic. (#545); the full changeset entry appears under `@tanstack/ai@0.16.1` above.

Updated dependencies [0f17a38, 0f17a38]:

@tanstack/ai-groq@0.1.12
Patch Changes
Decouple `@tanstack/ai-openrouter` from the shared OpenAI base, and collapse the base into a thinner shim over the `openai` SDK. (#545); the full changeset entry appears under `@tanstack/openai-base@0.3.0` above.

Updated dependencies [0f17a38, 0f17a38]:

@tanstack/ai-isolate-cloudflare@0.2.2
Patch Changes
@tanstack/ai-isolate-node@0.1.11
Patch Changes
@tanstack/ai-isolate-quickjs@0.1.11
Patch Changes
@tanstack/ai-ollama@0.6.14
Patch Changes
Unify the summarize subsystem on a shared chat-stream wrapper, plumb `modelOptions` through end-to-end, and tighten the `TProviderOptions` generic. (#545); the full changeset entry appears under `@tanstack/ai@0.16.1` above.

Updated dependencies [0f17a38]:

@tanstack/ai-openai@0.8.6
Patch Changes
Decouple `@tanstack/ai-openrouter` from the shared OpenAI base, and collapse the base into a thinner shim over the `openai` SDK. (#545); the full changeset entry appears under `@tanstack/openai-base@0.3.0` above.

Unify the summarize subsystem on a shared chat-stream wrapper, plumb `modelOptions` through end-to-end, and tighten the `TProviderOptions` generic. (#545); the full changeset entry appears under `@tanstack/ai@0.16.1` above.
*SummarizeProviderOptionsinterfaces (e.g.OpenAISummarizeProviderOptions,AnthropicSummarizeProviderOptions,GeminiSummarizeProviderOptions,OllamaSummarizeProviderOptions,OpenRouterSummarizeProviderOptions) are removed from the provider packages' public exports. Consumers who imported them should switch to inferring the type from the adapter (InferTextProviderOptions<typeof adapter>) or remove the explicit annotation (it'll be inferred from the adapter argument).Updated dependencies [
0f17a38,0f17a38]:@tanstack/ai-openrouter@0.8.6
Patch Changes
Decouple `@tanstack/ai-openrouter` from the shared OpenAI base, and collapse the base into a thinner shim over the `openai` SDK. (#545)

Three changes that ship together:

1. Rename `@tanstack/ai-openai-compatible` → `@tanstack/openai-base`. The previous name implied a multi-vendor protocol surface. After ai-openrouter is decoupled (see below), the only remaining consumers (ai-openai, ai-grok, ai-groq) all back onto the `openai` SDK with a different `baseURL` — "base" describes that role accurately. Imports change accordingly. `@tanstack/ai-openai-compatible@0.2.x` remains published for anyone with a pinned lockfile reference but will receive no further updates.

2. `@tanstack/openai-base` adopts the `openai` SDK directly. The previous package vendored ~720 LOC of hand-written wire-format types (`ChatCompletion`, `ResponseStreamEvent`, etc.) and exposed abstract `callChatCompletion*`/`callResponse*` hooks subclasses had to implement. Both are gone: the package depends on `openai` again and imports types directly from `openai/resources/...`. The vendored `src/types/` directory is removed; consumers that imported wire types from the package (e.g. `import type { ResponseInput } from '@tanstack/ai-openai-compatible'`) should now import from the openai SDK. The base adapter now takes an `OpenAI` client (`new OpenAIBaseChatCompletionsTextAdapter(model, name, openaiClient)`) and calls `client.chat.completions.create`/`client.responses.create` itself. Subclasses (ai-openai, ai-grok, ai-groq) now just construct the SDK with their provider-specific `baseURL` and pass it to `super` — the `callChatCompletion*`/`callResponse*` overrides go away. The other extension hooks (`extractReasoning`, `extractTextFromResponse`, `processStreamChunks`, `makeStructuredOutputCompatible`, `transformStructuredOutput`, `mapOptionsToRequest`, `convertMessage`) remain. Groq's `processStreamChunks` and `makeStructuredOutputCompatible` overrides (for `x_groq.usage` promotion and Groq's structured-output schema quirks) are unchanged.

3. Decouple `@tanstack/ai-openrouter` from the OpenAI base entirely. OpenRouter ships its own SDK (`@openrouter/sdk`) with a camelCase shape, so inheriting from the OpenAI-shaped base forced a snake_case ↔ camelCase round-trip on every request and stream event. ai-openrouter now extends `BaseTextAdapter` directly and inlines its own stream processors (`OpenRouterTextAdapter` for chat-completions, `OpenRouterResponsesTextAdapter` for the Responses beta), reading OpenRouter's camelCase types natively. The `@tanstack/openai-base` and `openai` dependencies are removed from ai-openrouter; only `@openrouter/sdk`, `@tanstack/ai`, and `@tanstack/ai-utils` remain.

Public API is unchanged: `openRouterText`, `openRouterResponsesText`, `createOpenRouterText`, `createOpenRouterResponsesText`, the OpenRouter tool factories, the provider routing surface (`provider`, `models`, `plugins`, `variant`, `transforms`), app attribution headers (`httpReferer`, `appTitle`), `:variant` model suffixing, `RequestAbortedError` propagation, and the OpenRouter-specific structured-output null preservation all behave the same. The ~300 LOC of inbound/outbound shape converters (`toOpenRouterRequest`, `toChatCompletion`, `adaptOpenRouterStreamChunks`, `toSnakeResponseResult`, …) are gone. ai-ollama remains on `BaseTextAdapter` directly — its native API uses a different wire format from Chat Completions and was never on the shared base.

Internal: drop the remaining duck-typed `as { ... }` casts on stream chunks in `OpenRouterResponsesTextAdapter`. Five sites (`response.created/in_progress/incomplete/failed` model + error capture, `response.content_part.added/done` payload, and the `response.completed` function-call detection) now narrow via the SDK's discriminated unions directly. Behaviourally identical; reduces the chance of an SDK type rename silently slipping past us. (#545)

Unify the summarize subsystem on a shared chat-stream wrapper, plumb `modelOptions` through end-to-end, and tighten the `TProviderOptions` generic. (#545)

Provider summarize adapters now share one implementation. Anthropic, Gemini, Ollama, and OpenRouter previously each shipped a bespoke 200–300 LOC summarize adapter that re-implemented streaming, error handling, usage accounting, and chunk assembly on top of their text adapter. They now construct a `ChatStreamSummarizeAdapter` (formerly `ChatStreamWrapperAdapter`, renamed and exported from `@tanstack/ai/activities`) wrapping their own text adapter, matching the existing OpenAI/Grok pattern. Removes ~600 LOC of duplicated logic across the six providers and ensures behavioural parity.

`SummarizationOptions.modelOptions` now reaches the wire. Previously the activity layer (`runSummarize`/`runStreamingSummarize`) silently dropped `modelOptions` when building the internal `SummarizationOptions` it forwarded to the adapter. Provider-specific knobs (Anthropic cache control, OpenRouter plugins, Gemini safety settings, Groq tuning params, …) now flow through correctly.

Provider summarize types resolve from the wrapped text adapter. Each provider previously shipped a bespoke `XSummarizeProviderOptions` interface (a partial copy of its text provider options). Those interfaces are removed; summarize provider options are now inferred from the text adapter's `~types` via the new `InferTextProviderOptions<TAdapter>` helper exported from `@tanstack/ai/activities`. IntelliSense for `modelOptions` on `summarize({ adapter: openai('gpt-4o'), … })` now matches what `chat({ adapter: openai('gpt-4o'), … })` would show.

`SummarizeAdapter` interface methods are now generic in `TProviderOptions`. `summarize` and `summarizeStream` previously took `SummarizationOptions` (defaulted, so `modelOptions` was effectively `Record<string, any>` regardless of the adapter's typed shape). They now take `SummarizationOptions<TProviderOptions>`, threading the class's `TProviderOptions` generic through. Source-compatible for callers that didn't specify the generic; type-tighter for implementers and downstream consumers.

Default aligned across the summarize surface. `SummarizationOptions`, `SummarizeAdapter`, `BaseSummarizeAdapter`, and `ChatStreamSummarizeAdapter` previously had a mixed `Record<string, any>`/`Record<string, unknown>`/`object` set of defaults for `TProviderOptions`. They now uniformly default to `Record<string, unknown>` so unparameterised consumers narrow before indexed access on `modelOptions`. The `extends object` constraint is unchanged — per-model typed interfaces (e.g. `OpenAIBaseOptions & OpenAIReasoningOptions & ...`) inferred via `InferTextProviderOptions<TAdapter>` continue to satisfy it without needing a string index signature. No public-surface signature change for callers that supply a concrete provider-options shape (every shipping adapter does).

Bespoke `*SummarizeProviderOptions` interfaces (e.g. `OpenAISummarizeProviderOptions`, `AnthropicSummarizeProviderOptions`, `GeminiSummarizeProviderOptions`, `OllamaSummarizeProviderOptions`, `OpenRouterSummarizeProviderOptions`) are removed from the provider packages' public exports. Consumers who imported them should switch to inferring the type from the adapter (`InferTextProviderOptions<typeof adapter>`) or remove the explicit annotation (it'll be inferred from the adapter argument).

Updated dependencies [`0f17a38`]:

@tanstack/ai-preact@0.6.23
Patch Changes
Updated dependencies [`0f17a38`]:

@tanstack/ai-react@0.8.3

Patch Changes

Updated dependencies [`0f17a38`]:

@tanstack/ai-solid@0.7.3

Patch Changes

Updated dependencies [`0f17a38`]:

@tanstack/ai-svelte@0.7.3

Patch Changes

Updated dependencies [`0f17a38`]:

@tanstack/ai-vue@0.7.3

Patch Changes

Updated dependencies [`0f17a38`]:

@tanstack/ai-vue-ui@0.1.34

Patch Changes

@tanstack/preact-ai-devtools@0.1.32

Patch Changes

@tanstack/react-ai-devtools@0.2.32

Patch Changes

@tanstack/solid-ai-devtools@0.2.32

Patch Changes

ts-svelte-chat@0.1.42

Patch Changes

Updated dependencies [`0f17a38`, `0f17a38`]:

ts-vue-chat@0.1.42

Patch Changes

Updated dependencies [`0f17a38`, `0f17a38`]:

vanilla-chat@0.0.38

Patch Changes

@tanstack/ai-code-mode-models-eval@0.0.16

Patch Changes

Updated dependencies [`0f17a38`, `0f17a38`]:
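The "thinner shim" shape from item 2 of the decoupling changeset can be sketched as follows. This is a minimal illustration only: `FakeOpenAIClient` stands in for the real `openai` SDK client, and the constructor bodies are assumptions, not the published adapter implementation (the real base adapter also calls `client.chat.completions.create`/`client.responses.create` itself).

```typescript
// Stand-in for the `openai` SDK client; the real one is `new OpenAI({ apiKey, baseURL })`.
class FakeOpenAIClient {
  constructor(readonly opts: { apiKey: string; baseURL?: string }) {}
}

// Sketch of the base adapter: it receives a constructed client instead of
// exposing abstract callChatCompletion*/callResponse* hooks.
class OpenAIBaseChatCompletionsTextAdapter {
  constructor(
    readonly model: string,
    readonly name: string,
    readonly client: FakeOpenAIClient,
  ) {}
}

// A Groq-style subclass: no wire-call overrides, just a provider-specific
// baseURL passed through to super.
class GroqTextAdapter extends OpenAIBaseChatCompletionsTextAdapter {
  constructor(model: string, apiKey: string) {
    super(
      model,
      'groq',
      new FakeOpenAIClient({ apiKey, baseURL: 'https://api.groq.com/openai/v1' }),
    )
  }
}

const adapter = new GroqTextAdapter('llama-3.3-70b-versatile', 'sk-test')
console.log(adapter.client.opts.baseURL) // https://api.groq.com/openai/v1
```

The design point is that the only provider-specific state left in a subclass is SDK configuration; everything wire-shaped lives in the base.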
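The shared chat-stream wrapper from the summarize changeset follows a simple delegation pattern; a stand-in sketch (all interface shapes here are illustrative, and the real `ChatStreamSummarizeAdapter` in `@tanstack/ai/activities` also handles errors and usage accounting):

```typescript
// What a provider's text adapter must offer the wrapper, reduced to one method.
interface TextChunk {
  delta: string
}

interface TextAdapterLike {
  chatStream(prompt: string): AsyncIterable<TextChunk>
}

// One shared summarize adapter that drives any text adapter's chat stream,
// instead of each provider re-implementing streaming and chunk assembly.
class ChatStreamSummarizeAdapter {
  constructor(private readonly text: TextAdapterLike) {}

  async summarize(input: string): Promise<string> {
    let out = ''
    for await (const chunk of this.text.chatStream(`Summarize: ${input}`)) {
      out += chunk.delta
    }
    return out
  }
}

// Usage with a fake text adapter that streams a fixed two-chunk summary.
const fakeText: TextAdapterLike = {
  async *chatStream() {
    yield { delta: 'short ' }
    yield { delta: 'summary' }
  },
}

new ChatStreamSummarizeAdapter(fakeText).summarize('long document').then(console.log) // short summary
```

Because the wrapper only depends on the streaming interface, behavioural parity across providers falls out of sharing this one code path.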
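The migration away from hand-written `*SummarizeProviderOptions` interfaces relies on inferring options from the text adapter's type carrier. A sketch of the pattern, where the `~types` property shape and the helper body are assumptions for illustration and may differ from the actual `InferTextProviderOptions` in `@tanstack/ai/activities`:

```typescript
// Illustrative adapter shape: the provider-options type rides along on a
// phantom `~types` property that exists only at the type level.
interface TextAdapter<TProviderOptions extends object> {
  model: string
  '~types'?: { providerOptions: TProviderOptions }
}

// Extract the provider-options type back out of an adapter type.
type InferTextProviderOptions<TAdapter> =
  TAdapter extends TextAdapter<infer TProviderOptions> ? TProviderOptions : never

// A fake "openai-like" options shape, for demonstration only.
interface FakeOpenAIOptions {
  temperature?: number
  reasoningEffort?: 'low' | 'high'
}

const fakeAdapter: TextAdapter<FakeOpenAIOptions> = { model: 'gpt-4o' }

// Derived from the adapter instead of a hand-maintained partial copy, so
// summarize and chat see the same modelOptions shape.
type SummarizeModelOptions = InferTextProviderOptions<typeof fakeAdapter>

const opts: SummarizeModelOptions = { temperature: 0.3, reasoningEffort: 'low' }
console.log(opts.temperature) // 0.3
```

This is why the removal note suggests `InferTextProviderOptions<typeof adapter>` as the replacement annotation: the type stays in sync with the adapter automatically.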
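The `Record<string, any>` → `Record<string, unknown>` default change means unparameterised consumers now have to narrow before using `modelOptions` values. A minimal sketch, with `SummarizationOptions` reduced to a simplified stand-in for the real interface:

```typescript
// Simplified stand-in: the real SummarizationOptions carries more fields.
interface SummarizationOptions<
  TProviderOptions extends object = Record<string, unknown>,
> {
  modelOptions?: TProviderOptions
}

// Unparameterised consumer: indexed access yields `unknown`, so values must
// be narrowed before use (under the old `any` default this compiled unchecked).
function readTemperature(opts: SummarizationOptions): number | undefined {
  const t = opts.modelOptions?.['temperature']
  return typeof t === 'number' ? t : undefined
}

// Parameterised with a concrete provider-options shape: no narrowing needed,
// and no string index signature required to satisfy `extends object`.
interface FakeProviderOptions {
  temperature: number
}
const typed: SummarizationOptions<FakeProviderOptions> = {
  modelOptions: { temperature: 0.2 },
}

console.log(readTemperature({ modelOptions: { temperature: 0.7 } })) // 0.7
console.log(typed.modelOptions?.temperature) // 0.2
```

This also illustrates the source-compatibility claim: callers that supply a concrete shape (like `FakeProviderOptions` here) see no signature change at all.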