
fix(core): Improve Vercel AI SDK instrumentation attributes#19717

Open
RulaKhaled wants to merge 11 commits into develop from vercelai-issues

Conversation

@RulaKhaled (Member) commented Mar 9, 2026

This PR introduces new attributes and fixes for the Vercel AI SDK instrumentation.

Closes #19574

@linear-code (bot) commented Mar 9, 2026

@github-actions (Contributor, bot) commented Mar 9, 2026

size-limit report 📦

| Path | Size | % Change | Change |
| --- | --- | --- | --- |
| @sentry/browser | 25.64 kB | - | - |
| @sentry/browser - with treeshaking flags | 24.14 kB | - | - |
| @sentry/browser (incl. Tracing) | 42.62 kB | - | - |
| @sentry/browser (incl. Tracing, Profiling) | 47.28 kB | - | - |
| @sentry/browser (incl. Tracing, Replay) | 81.42 kB | - | - |
| @sentry/browser (incl. Tracing, Replay) - with treeshaking flags | 71 kB | - | - |
| @sentry/browser (incl. Tracing, Replay with Canvas) | 86.12 kB | - | - |
| @sentry/browser (incl. Tracing, Replay, Feedback) | 98.37 kB | - | - |
| @sentry/browser (incl. Feedback) | 42.45 kB | - | - |
| @sentry/browser (incl. sendFeedback) | 30.31 kB | - | - |
| @sentry/browser (incl. FeedbackAsync) | 35.36 kB | - | - |
| @sentry/browser (incl. Metrics) | 26.92 kB | - | - |
| @sentry/browser (incl. Logs) | 27.07 kB | - | - |
| @sentry/browser (incl. Metrics & Logs) | 27.74 kB | - | - |
| @sentry/react | 27.39 kB | - | - |
| @sentry/react (incl. Tracing) | 44.95 kB | - | - |
| @sentry/vue | 30.08 kB | - | - |
| @sentry/vue (incl. Tracing) | 44.48 kB | - | - |
| @sentry/svelte | 25.66 kB | - | - |
| CDN Bundle | 28.27 kB | - | - |
| CDN Bundle (incl. Tracing) | 43.5 kB | - | - |
| CDN Bundle (incl. Logs, Metrics) | 29.13 kB | - | - |
| CDN Bundle (incl. Tracing, Logs, Metrics) | 44.34 kB | - | - |
| CDN Bundle (incl. Replay, Logs, Metrics) | 68.2 kB | - | - |
| CDN Bundle (incl. Tracing, Replay) | 80.32 kB | - | - |
| CDN Bundle (incl. Tracing, Replay, Logs, Metrics) | 81.22 kB | - | - |
| CDN Bundle (incl. Tracing, Replay, Feedback) | 85.86 kB | - | - |
| CDN Bundle (incl. Tracing, Replay, Feedback, Logs, Metrics) | 86.76 kB | - | - |
| CDN Bundle - uncompressed | 82.56 kB | - | - |
| CDN Bundle (incl. Tracing) - uncompressed | 128.5 kB | - | - |
| CDN Bundle (incl. Logs, Metrics) - uncompressed | 85.43 kB | - | - |
| CDN Bundle (incl. Tracing, Logs, Metrics) - uncompressed | 131.37 kB | - | - |
| CDN Bundle (incl. Replay, Logs, Metrics) - uncompressed | 209.06 kB | - | - |
| CDN Bundle (incl. Tracing, Replay) - uncompressed | 245.35 kB | - | - |
| CDN Bundle (incl. Tracing, Replay, Logs, Metrics) - uncompressed | 248.21 kB | - | - |
| CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed | 258.26 kB | - | - |
| CDN Bundle (incl. Tracing, Replay, Feedback, Logs, Metrics) - uncompressed | 261.11 kB | - | - |
| @sentry/nextjs (client) | 47.37 kB | - | - |
| @sentry/sveltekit (client) | 43.07 kB | - | - |
| @sentry/node-core | 52.27 kB | +0.02% | +7 B 🔺 |
| @sentry/node | 175.19 kB | +0.25% | +434 B 🔺 |
| @sentry/node - without tracing | 97.43 kB | +0.02% | +12 B 🔺 |
| @sentry/aws-serverless | 113.23 kB | +0.01% | +7 B 🔺 |

View base workflow run

@github-actions (Contributor, bot) commented Mar 9, 2026

node-overhead report 🧳

Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.

| Scenario | Requests/s | % of Baseline | Prev. Requests/s | Change % |
| --- | --- | --- | --- | --- |
| GET Baseline | 8,974 | - | 11,440 | -22% |
| GET With Sentry | 1,679 | 19% | 2,002 | -16% |
| GET With Sentry (error only) | 6,204 | 69% | 7,614 | -19% |
| POST Baseline | 1,187 | - | 1,186 | +0% |
| POST With Sentry | 566 | 48% | 595 | -5% |
| POST With Sentry (error only) | 1,034 | 87% | 1,043 | -1% |
| MYSQL Baseline | 3,305 | - | 3,993 | -17% |
| MYSQL With Sentry | 440 | 13% | 546 | -19% |
| MYSQL With Sentry (error only) | 2,704 | 82% | 3,342 | -19% |

View base workflow run

@RulaKhaled changed the title from "fix(core): Resolve" to "fix(core): Add output messages, tool description attributes, and fix media type stripping" on Mar 10, 2026
@RulaKhaled changed the title from "fix(core): Add output messages, tool description attributes, and fix media type stripping" to "fix(core): Improve Vercel AI SDK instrumentation attributes" on Mar 10, 2026
@RulaKhaled RulaKhaled marked this pull request as ready for review March 10, 2026 11:21
@cursor (bot) left a comment

Cursor Bugbot has reviewed your changes and found 1 potential issue.

Autofix Details

Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: V6 tests missing new output messages attribute assertions
    • Added explicit GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE assertions (and import) across the v6 span expectations so gen_ai.output.messages is now validated for text and tool-call outputs.

Create PR

Or push these changes by commenting:

@cursor push 8e0d6cceb7
Preview (8e0d6cceb7)
diff --git a/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts b/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts
--- a/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts
@@ -4,6 +4,7 @@
 import {
   GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
   GEN_AI_OPERATION_NAME_ATTRIBUTE,
+  GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE,
   GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
   GEN_AI_REQUEST_MODEL_ATTRIBUTE,
   GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
@@ -97,6 +98,8 @@
           'vercel.ai.settings.maxRetries': 2,
           'vercel.ai.streaming': false,
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the second span?"}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"Second span here!"}],"finish_reason":"stop"}]',
           [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
           [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
           [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
@@ -129,6 +132,8 @@
           'vercel.ai.response.id': expect.any(String),
           'vercel.ai.response.timestamp': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"Second span here!"}],"finish_reason":"stop"}]',
           [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
           [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
           [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
@@ -231,6 +236,8 @@
           'vercel.ai.prompt': '[{"role":"user","content":"Where is the first span?"}]',
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the first span?"}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"First span here!"}],"finish_reason":"stop"}]',
           'vercel.ai.response.finishReason': 'stop',
           'vercel.ai.settings.maxRetries': 2,
           'vercel.ai.streaming': false,
@@ -257,6 +264,8 @@
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]:
             '[{"role":"user","content":[{"type":"text","text":"Where is the first span?"}]}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"First span here!"}],"finish_reason":"stop"}]',
           'vercel.ai.response.finishReason': 'stop',
           'vercel.ai.response.id': expect.any(String),
           'vercel.ai.response.model': 'mock-model-id',
@@ -289,6 +298,8 @@
           'vercel.ai.prompt': '[{"role":"user","content":"Where is the second span?"}]',
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the second span?"}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"Second span here!"}],"finish_reason":"stop"}]',
           'vercel.ai.response.finishReason': 'stop',
           'vercel.ai.settings.maxRetries': 2,
           'vercel.ai.streaming': false,
@@ -324,6 +335,8 @@
           'vercel.ai.response.id': expect.any(String),
           'vercel.ai.response.timestamp': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"Second span here!"}],"finish_reason":"stop"}]',
           [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
           [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
           [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
@@ -346,6 +359,8 @@
           'vercel.ai.prompt': '[{"role":"user","content":"What is the weather in San Francisco?"}]',
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the weather in San Francisco?"}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"tool_call","id":"call-1","name":"getWeather","arguments":"{\\"location\\":\\"San Francisco\\"}"}],"finish_reason":"tool-calls"}]',
           'vercel.ai.response.finishReason': 'tool-calls',
           'vercel.ai.settings.maxRetries': 2,
           'vercel.ai.streaming': false,
@@ -371,6 +386,8 @@
           'vercel.ai.pipeline.name': 'generateText.doGenerate',
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"tool_call","id":"call-1","name":"getWeather","arguments":"{\\"location\\":\\"San Francisco\\"}"}],"finish_reason":"tool-calls"}]',
           'vercel.ai.prompt.toolChoice': expect.any(String),
           [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: EXPECTED_AVAILABLE_TOOLS_JSON,
           'vercel.ai.response.finishReason': 'tool-calls',

Comment on lines +134 to +140
const toolName = span.data[GEN_AI_TOOL_NAME_ATTRIBUTE];
if (typeof toolName === 'string') {
  const description = findToolDescription(event.spans, toolName);
  if (description) {
    span.data[GEN_AI_TOOL_DESCRIPTION_ATTRIBUTE] = description;
  }
}
Member commented:

l: Could we extract this into a helper, like we did for applyAccumulatedTokens?
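The suggested extraction could look roughly like the sketch below. `applyToolDescription` and the lookup inside `findToolDescription` are hypothetical names and shapes inferred only from the quoted snippet, not the PR's actual implementation.

```typescript
// Hedged sketch: extract the inline tool-description lookup into a helper,
// mirroring the style of applyAccumulatedTokens. Span shapes are simplified.
interface SpanLike {
  data: Record<string, unknown>;
}

const GEN_AI_TOOL_NAME_ATTRIBUTE = 'gen_ai.tool.name';
const GEN_AI_TOOL_DESCRIPTION_ATTRIBUTE = 'gen_ai.tool.description';

// Hypothetical lookup: scan sibling spans for a description recorded
// against the same tool name.
function findToolDescription(spans: SpanLike[], toolName: string): string | undefined {
  for (const s of spans) {
    if (s.data[GEN_AI_TOOL_NAME_ATTRIBUTE] === toolName) {
      const desc = s.data[GEN_AI_TOOL_DESCRIPTION_ATTRIBUTE];
      if (typeof desc === 'string') {
        return desc;
      }
    }
  }
  return undefined;
}

// The extracted helper: copies a matching description onto the span.
function applyToolDescription(span: SpanLike, spans: SpanLike[]): void {
  const toolName = span.data[GEN_AI_TOOL_NAME_ATTRIBUTE];
  if (typeof toolName === 'string') {
    const description = findToolDescription(spans, toolName);
    if (description) {
      span.data[GEN_AI_TOOL_DESCRIPTION_ATTRIBUTE] = description;
    }
  }
}
```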

function truncateContentArrayMessage(message: ContentArrayMessage, maxBytes: number): unknown[] {
  const { content } = message;

  // Find the first text part to truncate
Member commented:

m: Why do we only truncate the first text part? Is the assumption that these messages usually only have one text part?

Member (Author) replied:

Yes, because this is the most common use case, but we could and should account for more parts; I'll update.
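"Accounting for more parts" could be sketched as below: truncate across all text parts under a shared byte budget instead of only the first. The `Part` shape and helper name are assumptions based on the quoted snippets, and the character-based slice is only an approximation of a byte budget.

```typescript
// Hedged sketch, not the PR's actual code: keep text parts while they fit,
// truncate the first one that overflows, and drop the rest.
type Part = { type: string; text?: string };

function truncateTextParts(parts: Part[], maxBytes: number): Part[] {
  let remaining = maxBytes;
  const out: Part[] = [];
  for (const part of parts) {
    if (part.type !== 'text' || typeof part.text !== 'string') {
      // Non-text parts pass through unchanged in this sketch.
      out.push(part);
      continue;
    }
    const size = Buffer.byteLength(part.text, 'utf8');
    if (size <= remaining) {
      out.push(part);
      remaining -= size;
    } else if (remaining > 0) {
      // Char-based slice as a simple approximation of the byte budget.
      out.push({ type: 'text', text: part.text.slice(0, remaining) });
      remaining = 0;
    }
    // Once the budget is exhausted, later text parts are dropped.
  }
  return out;
}
```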

*/
function normalizeFinishReason(finishReason: unknown): string {
  if (typeof finishReason !== 'string') {
    return 'stop';
Member commented:

l: Why do we default to 'stop' if nothing is set?

Member (Author) replied:

Because finish_reason is required according to the OTel schema for output messages: https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-output-messages.json

"FinishReason": {
      "enum": [
          "stop",
          "length",
          "content_filter",
          "tool_call",
          "error"
      ]
  }

When the SDK doesn't give us one, 'stop' (normal completion) is the most sensible default assumption.
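The normalization described above could look like the following sketch. The mapping of hyphenated Vercel AI SDK spellings (e.g. 'tool-calls') onto the OTel enum is an assumption for illustration, not the PR's exact implementation.

```typescript
// Hedged sketch: map an SDK-reported finish reason onto the OTel
// output-messages enum, defaulting to 'stop' when nothing usable is set.
type OtelFinishReason = 'stop' | 'length' | 'content_filter' | 'tool_call' | 'error';

function normalizeFinishReason(finishReason: unknown): OtelFinishReason {
  // finish_reason is required by the OTel schema, so fall back to 'stop'
  // (normal completion) when the SDK doesn't report one.
  if (typeof finishReason !== 'string') {
    return 'stop';
  }
  switch (finishReason) {
    case 'stop':
    case 'length':
    case 'content_filter':
    case 'tool_call':
    case 'error':
      return finishReason;
    case 'tool-calls': // assumed Vercel AI SDK spelling
      return 'tool_call';
    case 'content-filter': // assumed hyphenated SDK spelling
      return 'content_filter';
    default:
      return 'stop';
  }
}
```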

// eslint-disable-next-line @typescript-eslint/no-dynamic-delete
delete attributes[AI_RESPONSE_TEXT_ATTRIBUTE];
// eslint-disable-next-line @typescript-eslint/no-dynamic-delete
delete attributes[AI_RESPONSE_TOOL_CALLS_ATTRIBUTE];
Member commented:

l: We do not delete the original finish reason attribute after normalizing here; is that on purpose?

Member (Author) replied:

Yeah, finish reason is an independent attribute that was not deprecated by the output messages attribute: https://getsentry.github.io/sentry-conventions/attributes/gen_ai/#gen_ai-response-finish_reasons

@cursor (bot) left a comment

Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

} else {
  // Subsequent text part doesn't fit: stop here
  break;
}

Non-text parts before text prevent text truncation

Low Severity

In truncateContentArrayMessage, the includedParts.length === 0 check determines whether to truncate a text part that doesn't fit. However, non-text parts are always pushed into includedParts unconditionally. So if a non-text part appears before a text part in the content array (e.g., [image_part, text_part]), includedParts will already be non-empty when the text part is evaluated. This causes the text to be dropped entirely instead of being truncated, unlike the analogous truncatePartsMessage where all parts go through the same size-check path. The result is a content array with only the non-text part and no text at all, even though a truncated text could have fit.
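One possible shape of the fix: decide whether to truncate based on whether any *text* part has been included yet, rather than on `includedParts.length === 0`, so a leading non-text part no longer suppresses truncation. Types and the helper name below are simplified assumptions, not the PR's actual code.

```typescript
// Hedged sketch of the fix described above.
type ContentPart = { type: string; text?: string };

function truncateContentParts(content: ContentPart[], maxBytes: number): ContentPart[] {
  const includedParts: ContentPart[] = [];
  let usedBytes = 0;
  let hasIncludedText = false;

  for (const part of content) {
    if (part.type !== 'text' || typeof part.text !== 'string') {
      // Non-text parts pass through; crucially, they no longer count
      // toward the "is this the first text part?" decision below.
      includedParts.push(part);
      continue;
    }
    const size = Buffer.byteLength(part.text, 'utf8');
    if (usedBytes + size <= maxBytes) {
      includedParts.push(part);
      usedBytes += size;
      hasIncludedText = true;
    } else if (!hasIncludedText) {
      // First text part that doesn't fit: truncate instead of dropping,
      // even when an image or other non-text part appeared before it.
      // (Char-based slice approximates the byte budget.)
      includedParts.push({ type: 'text', text: part.text.slice(0, maxBytes - usedBytes) });
      hasIncludedText = true;
      usedBytes = maxBytes;
    } else {
      break; // subsequent text part doesn't fit: stop here
    }
  }
  return includedParts;
}
```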



Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Fix Vercel AI Node.js tests

3 participants