feat(semconv): migrating span attributes to OTel gen_ai convention #3809
max-deygin-traceloop wants to merge 12 commits into `main` from
Conversation
Note: Reviews paused. This branch appears to be under active development, so CodeRabbit has automatically paused its review to avoid an influx of comments on new commits. This behavior is configurable in the settings.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
✅ Files skipped from review due to trivial changes (1)
📝 Walkthrough

Adds a migration guide and renames many LLM_* span attributes to GEN_AI_* variants, normalizes GenAISystem enum values to spec-aligned lowercase/vendor strings, restructures cache attribute paths, updates a meter name, adds compliance tests, and bumps package version and dependency constraints for opentelemetry-semantic-conventions-ai v0.5.0.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
saivedant169
left a comment
This is a significant change — lowercasing all GenAISystem enum values to align with OTel semantic conventions. A few things to watch for:
- Breaking change for existing users: anyone filtering spans by `gen_ai.system == "Anthropic"` in their observability backend (Datadog, Honeycomb, etc.) will stop matching after this change. Historical data in those backends will have the old PascalCase values. Is there a migration guide or deprecation period planned?
- `WATSONX = "ibm.watsonx.ai"` and `AZURE = "az.ai.openai"`: these follow the OTel spec naming, but they're a much bigger change than just lowercasing, since the actual string values are completely different from the previous `"Watsonx"`/`"Azure"`. Any documentation pointing to these values will need updating.
- `GEN_AI_USAGE_CACHE_CREATION_INPUT_TOKENS` changed from `gen_ai.usage.cache_creation_input_tokens` to `gen_ai.usage.cache_creation.input_tokens` (added dot separator). This is also a data-breaking change for anyone querying this attribute.
The alignment with OTel spec is the right direction — just want to make sure the downstream impact is documented.
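For the transition the comment asks about, one practical option is a small mapping from the legacy PascalCase `gen_ai.system` values to the new spec-aligned strings, usable when migrating stored queries or rewriting telemetry in a pipeline. This is a sketch based on the renames discussed in this PR; the helper name is illustrative, not part of the package:

```python
# Legacy -> spec-aligned gen_ai.system values, per the renames in this PR.
LEGACY_TO_SPEC = {
    "Anthropic": "anthropic",
    "MistralAI": "mistral_ai",
    "Watsonx": "ibm.watsonx.ai",
    "Azure": "az.ai.openai",
    "AWS": "aws.bedrock",
    "Google": "gcp.gen_ai",
}


def normalize_gen_ai_system(value: str) -> str:
    """Return the new spec value for a legacy one; pass unknown values through."""
    return LEGACY_TO_SPEC.get(value, value)
```

Already-lowercase values such as `"openai"` pass through unchanged, so the helper is safe to apply to mixed old/new data.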
…GEN_AI_

- Remove 20 SpanAttributes constants that duplicate the OTel upstream gen_ai_attributes module; consumers should import from there directly
- Rename remaining LLM_* constants to GEN_AI_* prefix (25 renames)
- Rename watsonx-specific LLM_* → GEN_AI_WATSONX_* (5 renames)
- Update _testing.py to reflect the new constant names and add TestSpanAttributesOldNamesGone and TestSpanAttributesGENAIRenamed
- Add tests/test_semconv_compliance.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
OzBenSimhonTraceloop
left a comment
There was a problem hiding this comment.
Great job. Two comments; other than that, LGTM.
@galkleinman can u please review as well?
| Enum | Old value | New value |
|---|---|---|
| `GenAISystem.ANTHROPIC` | `"Anthropic"` | `"anthropic"` |
Please add the full list here, this MD should contain everything changed
Co-authored-by: OzBenSimhon <168641541+OzBenSimhonTraceloop@users.noreply.github.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
♻️ Duplicate comments (1)
packages/opentelemetry-semantic-conventions-ai/MIGRATION.md (1)

108-116: ⚠️ Potential issue | 🟡 Minor

Document all `GenAISystem` enum value changes, not just `ANTHROPIC`.

The PR objectives and `__init__.py` show multiple enum value changes (e.g., `WATSONX` → `"ibm.watsonx.ai"`, `AWS` → `"aws.bedrock"`, `AZURE` → `"az.ai.openai"`, `GOOGLE` → `"gcp.gen_ai"`, `MISTRALAI` → `"mistral_ai"`), but only `ANTHROPIC` is documented here. Users filtering on these values will also need to update their dashboards and queries.

📝 Suggested addition

```diff
 ### `GenAISystem.ANTHROPIC` value

 | Enum | Old value | New value |
 |---|---|---|
 | `GenAISystem.ANTHROPIC` | `"Anthropic"` | `"anthropic"` |
+| `GenAISystem.WATSONX` | `"Watsonx"` | `"ibm.watsonx.ai"` |
+| `GenAISystem.AWS` | `"AWS"` | `"aws.bedrock"` |
+| `GenAISystem.AZURE` | `"Azure"` | `"az.ai.openai"` |
+| `GenAISystem.GOOGLE` | `"Google"` | `"gcp.gen_ai"` |
+| `GenAISystem.MISTRALAI` | `"MistralAI"` | `"mistral_ai"` |

 > **Dashboard impact**: Update dashboards filtering on `gen_ai.system == "Anthropic"` to use
-> `gen_ai.system == "anthropic"`.
+> `gen_ai.system == "anthropic"`. Similar updates are needed for Watsonx, AWS, Azure, Google, and MistralAI.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-semantic-conventions-ai/MIGRATION.md` around lines 108 - 116, The migration doc only lists the GenAISystem.ANTHROPIC change; update MIGRATION.md to document all enum value changes shown in __init__.py by adding rows for GenAISystem.WATSONX -> "ibm.watsonx.ai", GenAISystem.AWS -> "aws.bedrock", GenAISystem.AZURE -> "az.ai.openai", GenAISystem.GOOGLE -> "gcp.gen_ai", and GenAISystem.MISTRALAI -> "mistral_ai" (in the same table format as ANTHROPIC) and add a note advising users to update dashboards/queries that filter on gen_ai.system accordingly.
🧹 Nitpick comments (1)
packages/opentelemetry-semantic-conventions-ai/tests/test_semconv_compliance.py (1)

1-6: Duplicate test module; consider consolidating.

Both `test_span_attributes.py` and `test_semconv_compliance.py` import `*` from `_testing.py`, meaning pytest will discover and run the same `Test*` classes twice. Consider keeping only one of these files to avoid redundant test execution.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-semantic-conventions-ai/tests/test_semconv_compliance.py` around lines 1 - 6, The two test modules (test_span_attributes.py and test_semconv_compliance.py) both "from opentelemetry.semconv_ai._testing import *" causing pytest to collect the same Test* classes twice; fix by removing or consolidating one of the duplicate files (either delete test_semconv_compliance.py or test_span_attributes.py) OR change test_semconv_compliance.py so it does not import all Test* (instead import only the explicit functions/classes you need or import the module without wildcard), or alternatively add an explicit __all__ in _testing.py to prevent re-exporting test classes; update references to the unique identifiers (_testing.py and the Test* classes) accordingly to ensure each Test* class is defined/imported exactly once.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py`:
- Line 14: Remove the unused import "gen_ai_attributes" aliased as otel_gen_ai
from opentelemetry/semconv_ai/_testing.py since it triggers F401; delete the
line "from opentelemetry.semconv._incubating.attributes import gen_ai_attributes
as otel_gen_ai" and rely on tests to import GenAiSystemValues locally where
needed (no other code changes required).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: feafc8a5-3fa3-4f24-a929-07f965edfb90
⛔ Files ignored due to path filters (1)
`packages/opentelemetry-semantic-conventions-ai/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (5)
- packages/opentelemetry-semantic-conventions-ai/MIGRATION.md
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py
- packages/opentelemetry-semantic-conventions-ai/tests/test_semconv_compliance.py
- packages/opentelemetry-semantic-conventions-ai/tests/test_span_attributes.py
…er package
- Bump opentelemetry-semantic-conventions-ai to 0.5.0
- Add missing entries to MIGRATION.md:
- LLM_USAGE_CACHE_CREATION_INPUT_TOKENS and LLM_USAGE_CACHE_READ_INPUT_TOKENS
renamed to GEN_AI_* equivalents (section 2)
- LLM_STREAMING_TIME_TO_GENERATE removed from SpanAttributes / value changed in Meters
- Full GenAISystem enum value table (all 15 values changed, not just Anthropic)
- Pin opentelemetry-instrumentation-writer to semconv-ai<0.5.0 (was missing upper bound)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
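The pin described above looks roughly like this in a dependent package's `pyproject.toml`. This is a sketch: the lower bound shown is illustrative, and only the `<0.5.0` upper bound is what the commit adds:

```toml
[project]
name = "opentelemetry-instrumentation-writer"
dependencies = [
    # Upper bound keeps this package on the pre-rename attribute names
    # until it is migrated to semconv-ai 0.5.0.
    "opentelemetry-semantic-conventions-ai>=0.4.0,<0.5.0",
]
```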
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/opentelemetry-semantic-conventions-ai/MIGRATION.md`:
- Around line 142-145: The blockquote starting around the lines referencing
`GenAISystem.OPENAI` and the note about **Dashboard impact** contains an extra
blank line causing markdownlint MD028; remove the blank line so the entire note
remains a continuous blockquote (i.e., ensure each quoted line begins with `>`
and there are no empty lines between them) so the blockquote spans both the
`GenAISystem.OPENAI` sentence and the **Dashboard impact** paragraph without
interruption.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8946b19f-a45e-4ef3-a707-10f6c0930393
📒 Files selected for processing (4)
- packages/opentelemetry-instrumentation-writer/pyproject.toml
- packages/opentelemetry-semantic-conventions-ai/MIGRATION.md
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py
- packages/opentelemetry-semantic-conventions-ai/pyproject.toml
✅ Files skipped from review due to trivial changes (1)
- packages/opentelemetry-semantic-conventions-ai/pyproject.toml
> `SpanAttributes.LLM_STREAMING_TIME_TO_GENERATE` has been **removed**. The same name exists in
> `Meters` but its string value also changed.
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Verify where LLM_STREAMING_TIME_TO_GENERATE is defined (Meters vs SpanAttributes)
rg -n -C3 'LLM_STREAMING_TIME_TO_GENERATE|class Meters|class SpanAttributes' packages/opentelemetry-semantic-conventions-ai
```

Repository: traceloop/openllmetry
Length of output: 5105
Replace SpanAttributes with Meters on line 112.
The section header (line 110) and migration table (line 117) both reference Meters.LLM_STREAMING_TIME_TO_GENERATE, but line 112 incorrectly mentions SpanAttributes.LLM_STREAMING_TIME_TO_GENERATE. This symbol only exists in the Meters class, not SpanAttributes, and creates confusion during migration.
Update line 112 to say "Meters.LLM_STREAMING_TIME_TO_GENERATE has been updated. Its string value changed" (or similar phrasing that aligns with the actual symbol location and context).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/opentelemetry-semantic-conventions-ai/MIGRATION.md` around lines 112
- 113, Change the incorrect mention of
SpanAttributes.LLM_STREAMING_TIME_TO_GENERATE to
Meters.LLM_STREAMING_TIME_TO_GENERATE on the affected line so the text correctly
references the symbol's actual container; update the sentence to state that
Meters.LLM_STREAMING_TIME_TO_GENERATE has been updated and that its string value
changed, ensuring surrounding references (the section header and migration
table) remain consistent with Meters.LLM_STREAMING_TIME_TO_GENERATE.
```python
def test_all_values_lowercase(self):
    non_lowercase = [
        member.name
        for member in GenAISystem
        if member.value != member.value.lower() and "." not in member.value
    ]
```
Lowercase validation skips dotted enum values.
Line 330 excludes values containing ".", so uppercase dotted values would incorrectly pass this test despite the class contract saying all values must be lowercase.
🔧 Proposed fix

```diff
 non_lowercase = [
     member.name
     for member in GenAISystem
-    if member.value != member.value.lower() and "." not in member.value
+    if member.value != member.value.lower()
 ]
```

Based on learnings: Follow the OpenTelemetry GenAI semantic specification at https://opentelemetry.io/docs/specs/semconv/gen-ai/
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/_testing.py`
around lines 326 - 331, The test_all_values_lowercase test is incorrectly
skipping enum values that contain "." by filtering them out; update the test so
it validates every GenAISystem member's value is lowercase regardless of whether
it contains a dot (remove the "and '.' not in member.value" exclusion) so the
list comprehension uses only "member.value != member.value.lower()" to collect
non-lowercase values and surface any dotted uppercase values as failures.
Meters are not actively used; the LLM_STREAMING_TIME_TO_GENERATE string value change is irrelevant noise for consumers. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…al value

Meters are not in active use; revert the string value change to avoid unintended breakage.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
♻️ Duplicate comments (1)
packages/opentelemetry-semantic-conventions-ai/MIGRATION.md (1)
133-136: ⚠️ Potential issue | 🟡 Minor

Fix MD028: remove the blank line inside this blockquote.

There is a blank quoted-line break between the two blockquote paragraphs, which triggers markdownlint MD028.

Suggested markdown fix

```diff
 > `GenAISystem.OPENAI` (`"openai"`) is unchanged.
-
 > **Dashboard impact**: Update dashboards, alerts, and OTLP processors that filter on
 > `gen_ai.system` to use the new lowercase values shown above.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-semantic-conventions-ai/MIGRATION.md` around lines 133 - 136, Remove the blank quoted-line inside the blockquote that contains "`GenAISystem.OPENAI` (`\"openai\"`) is unchanged." and the following "**Dashboard impact**: Update dashboards, alerts, and OTLP processors that filter on `gen_ai.system` to use the new lowercase values shown above." — edit the blockquote so the two paragraphs are directly adjacent (no empty quote line) to satisfy MD028.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py`:
- Around line 137-141: The Watsonx attribute constants were renamed
(GEN_AI_WATSONX_DECODING_METHOD, GEN_AI_WATSONX_RANDOM_SEED,
GEN_AI_WATSONX_MAX_NEW_TOKENS, GEN_AI_WATSONX_MIN_NEW_TOKENS,
GEN_AI_WATSONX_REPETITION_PENALTY) but the Watsonx instrumentation still
references the old SpanAttributes names (LLM_DECODING_METHOD, LLM_RANDOM_SEED,
LLM_MAX_NEW_TOKENS, LLM_MIN_NEW_TOKENS, LLM_REPETITION_PENALTY), causing
AttributeError at runtime; fix this by either updating the instrumentation to
use the new GEN_AI_WATSONX_* constants (replace references to
SpanAttributes.LLM_* with SpanAttributes.GEN_AI_WATSONX_*) or add temporary
compatibility aliases in SpanAttributes (define LLM_DECODING_METHOD =
GEN_AI_WATSONX_DECODING_METHOD, LLM_RANDOM_SEED = GEN_AI_WATSONX_RANDOM_SEED,
LLM_MAX_NEW_TOKENS = GEN_AI_WATSONX_MAX_NEW_TOKENS, LLM_MIN_NEW_TOKENS =
GEN_AI_WATSONX_MIN_NEW_TOKENS, LLM_REPETITION_PENALTY =
GEN_AI_WATSONX_REPETITION_PENALTY) so existing instrumentation continues to work
until dependents are updated.
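The temporary-alias option from the prompt above, sketched for two of the five constants. The string values shown are an assumption based on the PR's note that `llm.watsonx.*` attribute values are intentionally unchanged; verify against `__init__.py` before relying on them:

```python
class SpanAttributes:
    # Renamed constants (new GEN_AI_WATSONX_* names; string values assumed
    # unchanged, since llm.watsonx.* is vendor-qualified per the PR notes).
    GEN_AI_WATSONX_DECODING_METHOD = "llm.watsonx.decoding_method"
    GEN_AI_WATSONX_RANDOM_SEED = "llm.watsonx.random_seed"

    # Temporary compatibility aliases so Watsonx instrumentation that still
    # references the old names keeps working until dependents are updated.
    LLM_DECODING_METHOD = GEN_AI_WATSONX_DECODING_METHOD
    LLM_RANDOM_SEED = GEN_AI_WATSONX_RANDOM_SEED
```

Because the aliases reference the new constants, a later value change propagates to both names, and the aliases can be deleted in one step once all instrumentation is migrated.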
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c313c2de-300e-474c-ad43-177a4c4845db
📒 Files selected for processing (2)

- packages/opentelemetry-semantic-conventions-ai/MIGRATION.md
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
```python
import json

tool_defs = [
    {
        "name": "my_tool",
        "description": "Does something",
        "parameters": {...},
    }
]
span.set_attribute(GenAIAttributes.GEN_AI_TOOL_DEFINITIONS, json.dumps(tool_defs))
```
Migration snippet is incomplete: GenAIAttributes is referenced without import.
The “After” example will fail when copied as-is because GenAIAttributes is undefined.
Suggested doc fix

```diff
 # After — one JSON array attribute
 import json
+from opentelemetry.semconv._incubating.attributes import gen_ai_attributes as GenAIAttributes
 tool_defs = [
     {
         "name": "my_tool",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/opentelemetry-semantic-conventions-ai/MIGRATION.md` around lines 152
- 161, The snippet uses GenAIAttributes (and GEN_AI_TOOL_DEFINITIONS in
span.set_attribute) but never imports it; add an import for GenAIAttributes
before the example (e.g., import GenAIAttributes from the package that exports
it such as opentelemetry_semantic_conventions_ai) so the After example runs
as-is and the span.set_attribute(GenAIAttributes.GEN_AI_TOOL_DEFINITIONS, ...)
reference resolves.
…sence check

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
galkleinman
left a comment
Just make sure every instrumentation has the upper bound for the semconv package of <0.5.0 (like you did in the writer instrumentation).
Other than that and my other comment, looks great!
Just bump the version in the version file, and i'll merge it and release the new version later this noon.
```diff
 [project]
 name = "opentelemetry-semantic-conventions-ai"
-version = "0.4.15"
+version = "0.5.0"
```
@galkleinman all other packages already had the bounds
What

Migrates all `llm.*` attribute string values in `opentelemetry-semantic-conventions-ai` to the `gen_ai.*` namespace, aligning with the OTel GenAI semantic conventions spec.

This is a breaking change in emitted attribute names. All downstream instrumentation packages must be updated before this merges to `main`.

SpanAttributes changes
| Constant | Old value | New value |
|---|---|---|
| `LLM_REQUEST_TYPE` | `llm.request.type` | `gen_ai.operation.name` |
| `LLM_USAGE_TOTAL_TOKENS` | `llm.usage.total_tokens` | `gen_ai.usage.total_tokens` |
| `LLM_USAGE_TOKEN_TYPE` | `llm.usage.token_type` | `gen_ai.usage.token_type` |
| `LLM_USAGE_CACHE_READ_INPUT_TOKENS` | `gen_ai.usage.cache_read_input_tokens` | `gen_ai.usage.cache_read.input_tokens` |
| `LLM_USAGE_CACHE_CREATION_INPUT_TOKENS` | `gen_ai.usage.cache_creation_input_tokens` | `gen_ai.usage.cache_creation.input_tokens` |
| `LLM_IS_STREAMING` | `llm.is_streaming` | `gen_ai.is_streaming` |
| `LLM_FREQUENCY_PENALTY` | `llm.frequency_penalty` | `gen_ai.request.frequency_penalty` |
| `LLM_PRESENCE_PENALTY` | `llm.presence_penalty` | `gen_ai.request.presence_penalty` |
| `LLM_TOP_K` | `llm.top_k` | `gen_ai.request.top_k` |
| `LLM_CHAT_STOP_SEQUENCES` | `llm.chat.stop_sequences` | `gen_ai.request.stop_sequences` |
| `LLM_REQUEST_FUNCTIONS` | `llm.request.functions` | `gen_ai.tool.definitions` |
| `LLM_REQUEST_REPETITION_PENALTY` | `llm.request.repetition_penalty` | `gen_ai.request.repetition_penalty` |
| `LLM_REQUEST_REASONING_EFFORT` | `llm.request.reasoning_effort` | `gen_ai.request.reasoning_effort` |
| `LLM_USAGE_REASONING_TOKENS` | `llm.usage.reasoning_tokens` | `gen_ai.usage.reasoning_tokens` |
| `LLM_RESPONSE_FINISH_REASON` | `llm.response.finish_reason` | `gen_ai.response.finish_reason` |
| `LLM_RESPONSE_STOP_REASON` | `llm.response.stop_reason` | `gen_ai.response.stop_reason` |
| `LLM_CONTENT_COMPLETION_CHUNK` | `llm.content.completion.chunk` | `gen_ai.content.completion.chunk` |
| `LLM_USER` | `llm.user` | `gen_ai.user` |
| `LLM_HEADERS` | `llm.headers` | `gen_ai.headers` |

Meters changes

| Constant | Old value | New value |
|---|---|---|
| `LLM_STREAMING_TIME_TO_GENERATE` | `llm.chat_completions.streaming_time_to_generate` | `gen_ai.client.chat_completions.streaming_time_to_generate` |

GenAISystem enum — value normalization
All values normalized to OTel `GenAiSystemValues` spec strings where a counterpart exists (e.g. `"Anthropic"` → `"anthropic"`, `"MistralAI"` → `"mistral_ai"`, `"Watsonx"` → `"ibm.watsonx.ai"`, `"Azure"` → `"az.ai.openai"`, `"AWS"` → `"aws.bedrock"`, `"Google"` → `"gcp.gen_ai"`). Vendors without an OTel counterpart use lowercase-with-underscores.

Intentionally not changed

- `llm.watsonx.*` span attributes and metrics — `llm.watsonx` is a vendor-qualified prefix, not the generic `llm.*` namespace
- `llm.openai.*` and `llm.anthropic.*` metrics — vendor-specific, deferred to respective package PRs

Full dependency list before merging to main
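For consumers updating stored queries or telemetry processors, the rename tables above can be distilled into a lookup map. A sketch with a representative subset of the renames (the helper name is illustrative and not part of the package; attribute strings are taken from the tables):

```python
# Old attribute name -> new gen_ai.* name, from the SpanAttributes table.
SPAN_ATTR_RENAMES = {
    "llm.request.type": "gen_ai.operation.name",
    "llm.usage.total_tokens": "gen_ai.usage.total_tokens",
    "gen_ai.usage.cache_read_input_tokens": "gen_ai.usage.cache_read.input_tokens",
    "gen_ai.usage.cache_creation_input_tokens": "gen_ai.usage.cache_creation.input_tokens",
    "llm.is_streaming": "gen_ai.is_streaming",
    "llm.request.functions": "gen_ai.tool.definitions",
    "llm.headers": "gen_ai.headers",
}


def rename_span_attributes(attrs: dict) -> dict:
    """Rewrite a span's attribute dict to the new gen_ai.* names."""
    return {SPAN_ATTR_RENAMES.get(k, k): v for k, v in attrs.items()}
```

Unmapped keys pass through untouched, so the helper can run over spans that already use the new names.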
Instrumentation package PRs (code)
Every package that references the changed constants will silently emit wrong attribute names until updated. Based on
codebase analysis:
| Package | Affected constants |
|---|---|
| `openai` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_IS_STREAMING`, `LLM_USER`, `LLM_HEADERS`, `LLM_FREQUENCY_PENALTY`, `LLM_PRESENCE_PENALTY`, `LLM_REQUEST_FUNCTIONS`, `LLM_RESPONSE_FINISH_REASON`, `LLM_USAGE_TOKEN_TYPE`, `LLM_CONTENT_COMPLETION_CHUNK`, `LLM_REQUEST_REASONING_EFFORT`, `LLM_USAGE_REASONING_TOKENS`, `LLM_STREAMING_TIME_TO_GENERATE` |
| `anthropic` | `LLM_USAGE_CACHE_CREATION_INPUT_TOKENS`, `LLM_RESPONSE_FINISH_REASON`, `LLM_RESPONSE_STOP_REASON`, `LLM_IS_STREAMING` (PR #3808 already landed — verify no regressions) |
| `langchain` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_USAGE_CACHE_READ_INPUT_TOKENS`, `LLM_REQUEST_FUNCTIONS` |
| `openai-agents` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_REQUEST_FUNCTIONS` |
| `crewai` | `LLM_FREQUENCY_PENALTY`, `LLM_PRESENCE_PENALTY` |
| `groq` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_IS_STREAMING`, `LLM_FREQUENCY_PENALTY`, `LLM_PRESENCE_PENALTY` |
| `cohere` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_FREQUENCY_PENALTY`, `LLM_PRESENCE_PENALTY`, `LLM_REQUEST_FUNCTIONS`, `LLM_CONTENT_COMPLETION_CHUNK` |
| `mistralai` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_IS_STREAMING` |
| `ollama` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_IS_STREAMING`, `LLM_REQUEST_FUNCTIONS`, `LLM_STREAMING_TIME_TO_GENERATE` |
| `google-generativeai` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_FREQUENCY_PENALTY`, `LLM_PRESENCE_PENALTY` |
| `vertexai` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_FREQUENCY_PENALTY`, `LLM_PRESENCE_PENALTY` |
| `bedrock` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS` |
| `together` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_IS_STREAMING` |
| `writer` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_IS_STREAMING`, `LLM_CHAT_STOP_SEQUENCES`, `LLM_STREAMING_TIME_TO_GENERATE` |
| `llamaindex` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_RESPONSE_FINISH_REASON` |
| `haystack` | `LLM_REQUEST_TYPE`, `LLM_FREQUENCY_PENALTY`, `LLM_PRESENCE_PENALTY` |
| `watsonx` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS` |
| `transformers` | `LLM_REQUEST_TYPE`, `LLM_REQUEST_REPETITION_PENALTY` |
| `alephalpha` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS` |
| `agno` | `LLM_USAGE_TOTAL_TOKENS` |
| `traceloop-sdk` | `LLM_REQUEST_TYPE`, `LLM_USAGE_TOTAL_TOKENS`, `LLM_USAGE_CACHE_READ_INPUT_TOKENS`, `LLM_USAGE_CACHE_CREATION_INPUT_TOKENS` |

Package versioning
`opentelemetry-semantic-conventions-ai` needs a major or minor version bump (breaking change in emitted attribute names). All dependent packages that bump their dependency range must also cut new releases.
VCR cassettes
Every instrumentation package listed above has recorded cassettes that assert on the old attribute names. All cassettes
must be re-recorded after the package code is updated.
Infrastructure

- …`llm.*` names need updating — ideally with a transitional period accepting both old and new names.
- …`llm.request.type`, `llm.usage.total_tokens`, `llm.is_streaming`, etc. will stop matching after this ships. …alongside the release.
- …window where the semconv package is at the new version but instrumentation packages are still emitting old names.
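One way to get the transitional period mentioned above without touching application code is to mirror old names onto new ones at the pipeline level. A sketch using the OpenTelemetry Collector's transform processor (assumes the collector-contrib distribution; only one attribute shown, the same pattern applies to the rest of the rename table):

```yaml
processors:
  transform:
    trace_statements:
      - context: span
        statements:
          # Copy the legacy attribute onto the new name so queries on
          # either keep matching during the migration window.
          - set(attributes["gen_ai.operation.name"], attributes["llm.request.type"]) where attributes["llm.request.type"] != nil
```

Once all instrumentation packages emit the new names, the processor can be dropped.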
Package PRs