feat: add beginner-friendly LLM tracing guide example #4129
📝 Walkthrough

A new beginner-friendly example script for LLM tracing is added. The script demonstrates Traceloop initialization, a decorated function that calls OpenAI, sample execution with output guidance, and explicit trace flushing. It includes installation instructions and references to supported auto-instrumented libraries.

Changes: Beginner-Friendly LLM Tracing Example

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes

🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 3
🧹 Nitpick comments (1)
packages/sample-app/sample_app/beginner_guide_llm_tracing.py (1)
Lines 34-37: ⚡ Quick win: Consider mentioning ConsoleSpanExporter for local debugging.

For a beginner-friendly guide, it would be helpful to mention `ConsoleSpanExporter` from `opentelemetry.sdk.trace.export` as an alternative for local debugging without requiring a Traceloop API key. This aligns with the repository's debugging practices and reduces initial setup friction. As per coding guidelines, ConsoleSpanExporter is the recommended approach for debugging OpenTelemetry spans and hierarchy issues.

💡 Example addition to comments:

```python
# For local debugging without TRACELOOP_API_KEY, you can use ConsoleSpanExporter:
# from opentelemetry.sdk.trace.export import ConsoleSpanExporter
# from opentelemetry.sdk.trace import TracerProvider
# from opentelemetry.sdk.trace.export import SimpleSpanProcessor
#
# provider = TracerProvider()
# provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/sample-app/sample_app/beginner_guide_llm_tracing.py` around lines 34 - 37, The example currently always uses Traceloop.init which requires a TraceLoop API key; add a short comment showing the alternative local-debug setup using ConsoleSpanExporter so beginners can run tracing without an API key — mention importing ConsoleSpanExporter, creating a TracerProvider, and attaching a SimpleSpanProcessor (reference symbols: ConsoleSpanExporter, TracerProvider, SimpleSpanProcessor, Traceloop.init) and suggest toggling between the Traceloop.init block and the ConsoleSpanExporter snippet for local debugging.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@packages/sample-app/sample_app/beginner_guide_llm_tracing.py`:
- Line 64: The return currently assumes response.choices[0].message.content is
always non-null; add a defensive null check when extracting content from the
OpenAI response (inspect response, response.choices, and
response.choices[0].message.content) and return a safe fallback (e.g., empty
string or a clear error message) or raise a descriptive exception; update the
code around the current return in beginner_guide_llm_tracing.py so it first
verifies that response and response.choices exist and that
response.choices[0].message.content is not None before returning it.
- Line 10: Replace the package install command string "pip install traceloop-sdk
openai" found in beginner_guide_llm_tracing.py with the repository-standard uv
package manager invocation (e.g., "uv install traceloop-sdk openai" or
equivalent uv syntax used elsewhere), ensuring any displayed CLI example or
README-style snippet uses uv instead of pip so it conforms to project package
management guidelines.
- Line 19: Update the run command string "python beginners_guide_llm_tracing.py"
to use the correct filename "python beginner_guide_llm_tracing.py" so the
example matches the actual script name; locate the incorrect command occurrence
(the line containing beginners_guide_llm_tracing.py) and replace it with the
singular "beginner_guide_llm_tracing.py".
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 43ed0a39-a532-4e9f-994f-4fd2f94d2519
📒 Files selected for processing (1)
packages/sample-app/sample_app/beginner_guide_llm_tracing.py
Reviewed lines:

```
requests (tokens, model, prompts, responses) automatically.

What you need:
pip install traceloop-sdk openai
```
Use uv package manager for installation commands.
The installation command should use uv instead of pip to align with the repository's package management standards. As per coding guidelines, all package management commands should be executed through the uv package manager.
📦 Proposed fix

```diff
- pip install traceloop-sdk openai
+ uv add traceloop-sdk openai
```

Or if demonstrating a standalone install:

```diff
- pip install traceloop-sdk openai
+ uv run pip install traceloop-sdk openai
```
Reviewed lines:

```
Run:
export TRACELOOP_API_KEY="your-key"
python beginners_guide_llm_tracing.py
```
Fix filename inconsistency in run command.
The filename in the example command is beginners_guide_llm_tracing.py (plural "beginners"), but the actual file is beginner_guide_llm_tracing.py (singular "beginner").
📝 Proposed fix

```diff
- python beginners_guide_llm_tracing.py
+ python beginner_guide_llm_tracing.py
```
Reviewed lines:

```python
    messages=[{"role": "user", "content": question}],
    max_tokens=100,
)
return response.choices[0].message.content
```
Add null check for message content.
The return statement assumes response.choices[0].message.content is never None, but the OpenAI API can return None for content in certain scenarios (e.g., function calls, refusals). For a beginner-friendly guide, demonstrating defensive coding practices would be valuable.
🛡️ Proposed fix with fallback

```diff
- return response.choices[0].message.content
+ return response.choices[0].message.content or ""
```

Or with more explicit handling:

```diff
- return response.choices[0].message.content
+ content = response.choices[0].message.content
+ if content is None:
+     return ""
+ return content
```
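If the more explicit variant is adopted, the defensive extraction could also be pulled into a small helper. The sketch below is illustrative rather than code from the PR; the `SimpleNamespace` objects merely mimic the shape of an OpenAI chat completion response:

```python
from types import SimpleNamespace

def extract_content(response) -> str:
    """Return the first choice's text, or "" when the API returns no
    content (e.g. tool calls or refusals)."""
    if response is None or not getattr(response, "choices", None):
        return ""
    content = response.choices[0].message.content
    return content if content is not None else ""

# Stand-ins mimicking the OpenAI response shape, for illustration only.
ok = SimpleNamespace(choices=[SimpleNamespace(message=SimpleNamespace(content="Hi!"))])
refusal = SimpleNamespace(choices=[SimpleNamespace(message=SimpleNamespace(content=None))])
```

Centralizing the check this way keeps the decorated workflow function focused on the API call while still handling the null-content case.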
Adds a step-by-step beginner-friendly example for LLM tracing at packages/sample-app/sample_app/beginner_guide_llm_tracing.py. Closes #4069