refactor: migrate Anthropic provider to @ai-sdk/anthropic #11287

daniel-lxs wants to merge 1 commit into main
Conversation
Replace the raw `@anthropic-ai/sdk` implementation with `@ai-sdk/anthropic` (Vercel AI SDK) for consistency with other providers (Bedrock, DeepSeek, Mistral, etc.).

Changes:
- Replace `Anthropic()` client with `createAnthropic()` from `@ai-sdk/anthropic`
- Replace manual stream parsing with `streamText()` + `processAiSdkStreamPart()`
- Replace `client.messages.create()` with `generateText()` for `completePrompt()`
- Use `convertToAiSdkMessages()` and `convertToolsForAiSdk()` for format conversion
- Handle prompt caching via AI SDK `providerOptions` (`cacheControl` on messages)
- Handle extended thinking via `providerOptions.anthropic.thinking`
- Add `getThoughtSignature()` and `getRedactedThinkingBlocks()` for thinking-signature round-tripping (matching the Bedrock pattern; improves on the original, which had a TODO for this)
- Add `isAiSdkProvider()` returning `true`
- Update tests to mock `@ai-sdk/anthropic` and `ai` instead of the raw SDK
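The "cache the last two user messages" step described above can be sketched as follows. The helper name `applyCacheControlToAiSdkMessages()` comes from the PR itself; the types and logic below are an illustrative reimplementation under assumed message shapes, not the actual code:

```typescript
// Illustrative sketch of the walk-in-parallel cache-breakpoint approach:
// mark the last content block of the last two user messages as an
// ephemeral cache breakpoint, mirroring the raw-SDK code in the diff.
type Part = { type: "text"; text: string; providerOptions?: unknown }
type Msg = { role: "user" | "assistant"; content: Part[] }

const ephemeral = { anthropic: { cacheControl: { type: "ephemeral" } } }

function applyCacheControl(messages: Msg[]): Msg[] {
	// Collect indices of user messages, then keep only the last two.
	const userIdx = messages.flatMap((m, i) => (m.role === "user" ? [i] : []))
	const marked = new Set(userIdx.slice(-2))
	return messages.map((m, i) =>
		marked.has(i)
			? {
					...m,
					// Tag only the final content block of each marked message.
					content: m.content.map((part, j) =>
						j === m.content.length - 1 ? { ...part, providerOptions: ephemeral } : part,
					),
				}
			: m,
	)
}
```

Earlier user messages are left untouched, so the cache breakpoints slide forward as the conversation grows.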
Clean migration that follows the established AI SDK patterns from the Bedrock provider. Two items to address:
```typescript
tool_choice: convertOpenAIToolChoiceToAnthropic(metadata?.tool_choice, metadata?.parallelToolCalls),
```

```typescript
this.provider = createAnthropic({
	baseURL: options.anthropicBaseUrl || undefined,
	...(useAuthToken ? { authToken: options.apiKey } : { apiKey: options.apiKey ?? "not-provided" }),
```
The ?? "not-provided" fallback changes behavior when options.apiKey is undefined. The old raw SDK code passed apiKey: undefined, which let the @anthropic-ai/sdk fall back to the ANTHROPIC_API_KEY environment variable. Here the literal string "not-provided" will be used as the actual API key, causing a 401 from the Anthropic API instead of a graceful env-var fallback. This affects anyone who configures the API key via environment variable rather than extension settings.
```diff
- ...(useAuthToken ? { authToken: options.apiKey } : { apiKey: options.apiKey ?? "not-provided" }),
+ ...(useAuthToken ? { authToken: options.apiKey } : { apiKey: options.apiKey }),
```
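A minimal reproduction of the behavior change, assuming (as the comment above describes) that the client resolves a missing key from `ANTHROPIC_API_KEY`. The `resolveApiKey` helper is hypothetical and stands in for the SDK's internal fallback, not its actual code:

```typescript
// Hypothetical stand-in for @anthropic-ai/sdk's env-var fallback:
// an undefined apiKey defers to ANTHROPIC_API_KEY.
function resolveApiKey(apiKey?: string): string | undefined {
	return apiKey ?? process.env.ANTHROPIC_API_KEY
}

process.env.ANTHROPIC_API_KEY = "sk-from-env"

// Old behavior: apiKey stays undefined, so the env var wins.
console.log(resolveApiKey(undefined)) // "sk-from-env"

// New behavior: the literal placeholder masks the env var
// and would be sent to the API as the actual key.
console.log(resolveApiKey("not-provided")) // "not-provided"
```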
```typescript
// Prepend system prompt as a system message with cache control
const systemMessage = {
	role: "system" as const,
	content: systemPrompt,
	providerOptions: {
		anthropic: { cacheControl: { type: "ephemeral" } },
	},
}
```

```typescript
const lastUserMsgIndex = userMsgIndices[userMsgIndices.length - 1] ?? -1
const secondLastMsgUserIndex = userMsgIndices[userMsgIndices.length - 2] ?? -1
```
```typescript
try {
	stream = await this.client.messages.create(
		{
			model: modelId,
			max_tokens: maxTokens ?? ANTHROPIC_DEFAULT_MAX_TOKENS,
			temperature,
			thinking,
			// Setting cache breakpoint for system prompt so new tasks can reuse it.
			system: [{ text: systemPrompt, type: "text", cache_control: cacheControl }],
			messages: sanitizedMessages.map((message, index) => {
				if (index === lastUserMsgIndex || index === secondLastMsgUserIndex) {
					return {
						...message,
						content:
							typeof message.content === "string"
								? [{ type: "text", text: message.content, cache_control: cacheControl }]
								: message.content.map((content, contentIndex) =>
										contentIndex === message.content.length - 1
											? { ...content, cache_control: cacheControl }
											: content,
									),
					}
				}
				return message
			}),
			stream: true,
			...nativeToolParams,
		},
		(() => {
			// prompt caching: https://x.com/alexalbert__/status/1823751995901272068
			// https://github.com/anthropics/anthropic-sdk-typescript?tab=readme-ov-file#default-headers
			// https://github.com/anthropics/anthropic-sdk-typescript/commit/c920b77fc67bd839bfeb6716ceab9d7c9bbe7393

			// Then check for models that support prompt caching
			switch (modelId) {
				case "claude-sonnet-4-5":
				case "claude-sonnet-4-20250514":
				case "claude-opus-4-6":
				case "claude-opus-4-5-20251101":
				case "claude-opus-4-1-20250805":
				case "claude-opus-4-20250514":
				case "claude-3-7-sonnet-20250219":
				case "claude-3-5-sonnet-20241022":
				case "claude-3-5-haiku-20241022":
				case "claude-3-opus-20240229":
				case "claude-haiku-4-5-20251001":
				case "claude-3-haiku-20240307":
					betas.push("prompt-caching-2024-07-31")
					return { headers: { "anthropic-beta": betas.join(",") } }
				default:
					return undefined
			}
		})(),
	)
} catch (error) {
	TelemetryService.instance.captureException(
		new ApiProviderError(
			error instanceof Error ? error.message : String(error),
			this.providerName,
			modelId,
			"createMessage",
		),
	)
	throw error
}
```
```typescript
// Build streamText request
const requestOptions: Parameters<typeof streamText>[0] = {
	model: this.provider(modelConfig.id),
	messages: [systemMessage, ...aiSdkMessages],
	temperature: modelConfig.temperature,
	maxOutputTokens: modelConfig.maxTokens ?? ANTHROPIC_DEFAULT_MAX_TOKENS,
	tools: aiSdkTools,
	toolChoice: mapToolChoice(metadata?.tool_choice),
	...(Object.keys(anthropicProviderOptions).length > 0 && {
		providerOptions: { anthropic: anthropicProviderOptions } as any,
	}),
```
The system prompt is passed as a { role: "system" } message in the messages array with providerOptions for cache control. The established AI SDK pattern in this codebase (see the Bedrock handler) uses the system parameter with systemProviderOptions instead, which is the documented streamText() interface for system-level provider options. With the message-based approach, @ai-sdk/anthropic may not forward providerOptions from the system message to the underlying Anthropic API's cache_control on the system prompt, silently disabling prompt caching and increasing costs.
Consider aligning with the Bedrock pattern:
```typescript
const requestOptions = {
	model: this.provider(modelConfig.id),
	system: systemPrompt,
	systemProviderOptions: {
		anthropic: { cacheControl: { type: "ephemeral" } },
	},
	messages: aiSdkMessages,
	// ...
};
```
Summary
Migrates the Anthropic provider from the raw `@anthropic-ai/sdk` to `@ai-sdk/anthropic` (Vercel AI SDK) for consistency with other providers (Bedrock, DeepSeek, Mistral, etc.).

Changes

`src/api/providers/anthropic.ts`
- Replace `new Anthropic()` client with `createAnthropic()` from `@ai-sdk/anthropic`
- Replace manual stream event parsing (`message_start`, `content_block_start`, `content_block_delta`, etc.) with `streamText()` + `processAiSdkStreamPart()`
- Replace `client.messages.create({stream: false})` with `generateText()`
- Use `convertToAiSdkMessages()` for format conversion
- Use `convertToolsForAiSdk()` and `mapToolChoice()` (replaces `convertOpenAIToolsToAnthropic`/`convertOpenAIToolChoiceToAnthropic`)
- Prompt caching via `providerOptions.anthropic.cacheControl` on a system message; last 2 user messages cached via `applyCacheControlToAiSdkMessages()` (same walk-in-parallel approach as Bedrock)
- Extended thinking via `providerOptions.anthropic.thinking` with `budgetTokens`
- Add `getThoughtSignature()` and `getRedactedThinkingBlocks()`; improves on the original, which had a TODO noting signatures required restructuring
- Beta headers (`output-128k`, `context-1m`) set at provider creation
- `isAiSdkProvider()` returns `true`

`src/api/providers/__tests__/anthropic.spec.ts`
- Mock `@ai-sdk/anthropic` and `ai` (`streamText`/`generateText`) instead of the raw SDK
- Add coverage for `isAiSdkProvider()`

`src/package.json`
- Add `@ai-sdk/anthropic` dependency (`^3.0.38`)

Test Results
Important
Refactor Anthropic provider to use `@ai-sdk/anthropic` for consistency with other providers, updating streaming, completion, and caching logic.

- Migrate from `@anthropic-ai/sdk` to `@ai-sdk/anthropic`.
- Replace `new Anthropic()` with `createAnthropic()` in `anthropic.ts`.
- Use `streamText()` and `generateText()` for streaming and completion.
- Use `convertToAiSdkMessages()` and `convertToolsForAiSdk()` for message and tool conversion.
- Apply cache control via `applyCacheControlToAiSdkMessages()`.
- Add `getThoughtSignature()` and `getRedactedThinkingBlocks()` for thinking features.
- `isAiSdkProvider()` returns `true`.
- Update `anthropic.spec.ts` to mock `@ai-sdk/anthropic` and `ai`.
- Add `@ai-sdk/anthropic` to `package.json`.

This description was created for 73ef718 and will automatically update as commits are pushed.