
feat: add beginner-friendly LLM tracing guide example#4129

Open
elsare7now wants to merge 1 commit into traceloop:main from elsare7now:hunter3/beginner-tracing-guide

Conversation

@elsare7now commented May 12, 2026

Adds a step-by-step beginner-friendly example for LLM tracing at packages/sample-app/sample_app/beginner_guide_llm_tracing.py. Closes #4069

Summary by CodeRabbit

  • Documentation
    • Added beginner guide example demonstrating end-to-end LLM tracing with OpenAI integration, covering initialization, request execution, and trace visualization in the platform UI.
    • Includes comprehensive reference list of supported auto-instrumented libraries to streamline integration across different frameworks and tools.


@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

@coderabbitai

coderabbitai Bot commented May 12, 2026

📝 Walkthrough

Walkthrough

A new beginner-friendly example script for LLM tracing is added. The script demonstrates Traceloop initialization, a decorated function that calls OpenAI, sample execution with output guidance, and explicit trace flushing. It includes installation instructions and references to supported auto-instrumented libraries.

Changes

Beginner-Friendly LLM Tracing Example

All changes touch a single file: packages/sample-app/sample_app/beginner_guide_llm_tracing.py

  • Documentation and Reference Material: Module-level documentation explains the example's purpose, required installation, execution steps, and basic flow, and provides follow-up learning paths with a list of auto-instrumented providers and frameworks.
  • Traceloop Initialization and Client Setup: Traceloop is initialized before the OpenAI import (as required), batch sending is disabled for immediate visibility, and an OpenAI client is configured using an environment-provided API key.
  • Traced LLM Function and Execution Flow: The @task-decorated ask_llm(question: str) function performs a chat completion request and returns the first message content. The main entrypoint calls the function with a sample question, prints the response, displays user-facing guidance on viewing traces in the Traceloop dashboard, and explicitly flushes pending traces before exit.
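The decorate-call-flush flow described above can be illustrated with a toy stand-in that uses only the standard library. This is not the real Traceloop SDK: the `traced` decorator and in-memory `SPANS` list are hypothetical stand-ins for `@task` and the span exporter, and `ask_llm` just echoes instead of calling OpenAI.

```python
import functools
import time

# Toy in-memory span store, standing in for a real exporter (illustrative only).
SPANS = []


def traced(name=None):
    """A toy stand-in for a @task-style decorator: records a span with the
    wrapped function's name and wall-clock duration."""
    def decorator(fn):
        span_name = name or fn.__name__

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                SPANS.append({
                    "name": span_name,
                    "duration_s": time.perf_counter() - start,
                })
        return wrapper
    return decorator


@traced(name="ask_llm")
def ask_llm(question: str) -> str:
    # A real implementation would call the OpenAI chat completions API here.
    return f"echo: {question}"


if __name__ == "__main__":
    print(ask_llm("What is tracing?"))
    # "Flushing" here is just inspecting the collected spans.
    print(SPANS)
```

The point of the sketch is the shape of the API: the decorator wraps the call, the span is recorded even if the call raises (the `finally` block), and spans are inspected or exported at exit.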

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

🐰 A hop and a bound through traces so bright,
Traceloop initialized, OpenAI in sight!
Task-decorated calls now visible to all,
Beginners tracing LLMs, learning to enthrall! ✨

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feat: add beginner-friendly LLM tracing guide example' accurately describes the main change: adding a new beginner-friendly example for LLM tracing.
  • Linked Issues Check: ✅ Passed. The PR addresses all coding requirements from issue #4069: a minimal setup guide with Traceloop initialization, a basic LLM call with the OpenAI client, tracing via the @task decorator, and a trace-flushing step.
  • Out of Scope Changes Check: ✅ Passed. All changes are focused on the single new file containing the beginner-friendly LLM tracing example, directly aligned with issue #4069, with no unrelated modifications.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, above the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




@coderabbitai (bot) left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
packages/sample-app/sample_app/beginner_guide_llm_tracing.py (1)

34-37: ⚡ Quick win

Consider mentioning ConsoleSpanExporter for local debugging.

For a beginner-friendly guide, it would be helpful to mention ConsoleSpanExporter from opentelemetry.sdk.trace.export as an alternative for local debugging without requiring a Traceloop API key. This aligns with the repository's debugging practices and reduces initial setup friction. As per coding guidelines, ConsoleSpanExporter is the recommended approach for debugging OpenTelemetry spans and hierarchy issues.

💡 Example addition to comments
# For local debugging without TRACELOOP_API_KEY, you can use ConsoleSpanExporter:
# from opentelemetry.sdk.trace.export import ConsoleSpanExporter
# from opentelemetry.sdk.trace import TracerProvider
# from opentelemetry.sdk.trace.export import SimpleSpanProcessor
# 
# provider = TracerProvider()
# provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@packages/sample-app/sample_app/beginner_guide_llm_tracing.py` around lines 34
- 37, The example currently always uses Traceloop.init which requires a
TraceLoop API key; add a short comment showing the alternative local-debug setup
using ConsoleSpanExporter so beginners can run tracing without an API key —
mention importing ConsoleSpanExporter, creating a TracerProvider, and attaching
a SimpleSpanProcessor (reference symbols: ConsoleSpanExporter, TracerProvider,
SimpleSpanProcessor, Traceloop.init) and suggest toggling between the
Traceloop.init block and the ConsoleSpanExporter snippet for local debugging.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 43ed0a39-a532-4e9f-994f-4fd2f94d2519

📥 Commits

Reviewing files that changed from the base of the PR and between 6d3e696 and a61bd68.

📒 Files selected for processing (1)
  • packages/sample-app/sample_app/beginner_guide_llm_tracing.py

requests (tokens, model, prompts, responses) automatically.

What you need:
pip install traceloop-sdk openai


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Use uv package manager for installation commands.

The installation command should use uv instead of pip to align with the repository's package management standards. As per coding guidelines, all package management commands should be executed through the uv package manager.

📦 Proposed fix
-    pip install traceloop-sdk openai
+    uv add traceloop-sdk openai

Or if demonstrating a standalone install:

-    pip install traceloop-sdk openai
+    uv run pip install traceloop-sdk openai
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    pip install traceloop-sdk openai
+    uv run pip install traceloop-sdk openai

Suggested change
-    pip install traceloop-sdk openai
+    uv add traceloop-sdk openai
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@packages/sample-app/sample_app/beginner_guide_llm_tracing.py` at line 10,
Replace the package install command string "pip install traceloop-sdk openai"
found in beginner_guide_llm_tracing.py with the repository-standard uv package
manager invocation (e.g., "uv install traceloop-sdk openai" or equivalent uv
syntax used elsewhere), ensuring any displayed CLI example or README-style
snippet uses uv instead of pip so it conforms to project package management
guidelines.


Run:
export TRACELOOP_API_KEY="your-key"
python beginners_guide_llm_tracing.py


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Fix filename inconsistency in run command.

The filename in the example command is beginners_guide_llm_tracing.py (plural "beginners"), but the actual file is beginner_guide_llm_tracing.py (singular "beginner").

📝 Proposed fix
-    python beginners_guide_llm_tracing.py
+    python beginner_guide_llm_tracing.py
📝 Committable suggestion


Suggested change
-    python beginners_guide_llm_tracing.py
+    python beginner_guide_llm_tracing.py
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@packages/sample-app/sample_app/beginner_guide_llm_tracing.py` at line 19,
Update the run command string "python beginners_guide_llm_tracing.py" to use the
correct filename "python beginner_guide_llm_tracing.py" so the example matches
the actual script name; locate the incorrect command occurrence (the line
containing beginners_guide_llm_tracing.py) and replace it with the singular
"beginner_guide_llm_tracing.py".

messages=[{"role": "user", "content": question}],
max_tokens=100,
)
return response.choices[0].message.content


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Add null check for message content.

The return statement assumes response.choices[0].message.content is never None, but the OpenAI API can return None for content in certain scenarios (e.g., function calls, refusals). For a beginner-friendly guide, demonstrating defensive coding practices would be valuable.

🛡️ Proposed fix with fallback
-    return response.choices[0].message.content
+    return response.choices[0].message.content or ""

Or with more explicit handling:

-    return response.choices[0].message.content
+    content = response.choices[0].message.content
+    if content is None:
+        return ""
+    return content
📝 Committable suggestion


Suggested change
-    return response.choices[0].message.content
+    return response.choices[0].message.content or ""

Suggested change
-    return response.choices[0].message.content
+    content = response.choices[0].message.content
+    if content is None:
+        return ""
+    return content
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@packages/sample-app/sample_app/beginner_guide_llm_tracing.py` at line 64, The
return currently assumes response.choices[0].message.content is always non-null;
add a defensive null check when extracting content from the OpenAI response
(inspect response, response.choices, and response.choices[0].message.content)
and return a safe fallback (e.g., empty string or a clear error message) or
raise a descriptive exception; update the code around the current return in
beginner_guide_llm_tracing.py so it first verifies that response and
response.choices exist and that response.choices[0].message.content is not None
before returning it.
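The defensive-extraction pattern suggested above can be exercised without the OpenAI SDK by mocking the response shape with `types.SimpleNamespace`. The helper name `extract_content` is hypothetical (it is not in the PR); the mocked objects only mimic the `response.choices[0].message.content` attribute path of an OpenAI-style chat completion response.

```python
from types import SimpleNamespace


def extract_content(response) -> str:
    """Defensively pull the first message's text from an OpenAI-style chat
    completion response, falling back to an empty string."""
    # Guard against a missing or empty choices list.
    if not getattr(response, "choices", None):
        return ""
    content = response.choices[0].message.content
    # content can be None (e.g., tool calls or refusals); never return None.
    return content if content is not None else ""


# Mocked responses mimicking the OpenAI response attribute layout.
ok = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="hello"))]
)
refusal = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content=None))]
)

print(extract_content(ok))       # hello
print(extract_content(refusal))  # (empty string)
```

Returning `""` keeps the beginner example simple; a production caller might instead raise a descriptive exception so a None content is surfaced rather than silently swallowed.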



Development

Successfully merging this pull request may close these issues.

🚀 Feature: Suggestion: Add beginner-friendly example for LLM tracing

2 participants