
Conversation

@wsa-2002

Please ensure you have read the contribution guide before creating a pull request.

Link to Issue or Description of Change

1. Link to an existing issue (if applicable):

  • Closes: #issue_number
  • Related: #issue_number

2. Or, if no issue exists, describe the change:

Problem:
When using ADK in streaming mode, usage_metadata.prompt_token_count may be None, which emits the following log message:

Invalid type NoneType for attribute 'gen_ai.usage.input_tokens' value. Expected one of ['bool', 'str', 'bytes', 'int', 'float'] or a sequence of those types

Solution:
Skip setting the span attribute when the prompt token count is None.
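The guard can be sketched as follows. This is a minimal illustration with stand-in classes, not the actual ADK telemetry code; the attribute name gen_ai.usage.input_tokens follows the OpenTelemetry GenAI semantic conventions, and set_input_token_attribute is a hypothetical helper name.

```python
# Minimal sketch of the fix. FakeSpan and UsageMetadata are stand-ins
# for an OpenTelemetry span and the usage metadata on an LLM response.

class FakeSpan:
  """Stand-in for an OpenTelemetry span that records set attributes."""

  def __init__(self):
    self.attributes = {}

  def set_attribute(self, key, value):
    self.attributes[key] = value


class UsageMetadata:
  """Stand-in for usage_metadata on a (possibly streaming) response."""

  def __init__(self, prompt_token_count=None):
    self.prompt_token_count = prompt_token_count


def set_input_token_attribute(span, usage_metadata):
  # OpenTelemetry rejects None attribute values, so only set the
  # attribute when a prompt token count is actually present.
  if usage_metadata is not None and usage_metadata.prompt_token_count is not None:
    span.set_attribute(
        'gen_ai.usage.input_tokens', usage_metadata.prompt_token_count
    )
```

With prompt_token_count=None the attribute is simply skipped, instead of triggering the "Invalid type NoneType" warning.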

Unit Tests:

  • All unit tests pass locally.

Please include a summary of passed pytest results.

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.
  • Any dependent changes have been merged and published in downstream modules.

@google-cla

google-cla bot commented Nov 10, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@gemini-code-assist
Contributor

Summary of Changes

Hello @wsa-2002, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a specific runtime error encountered in the adk library when operating in streaming mode. Previously, if the prompt_token_count within the usage_metadata was None, it would cause a logging error during telemetry attribute assignment. The changes introduce a defensive check to prevent this NoneType error, ensuring that the system continues to function correctly and log relevant data without interruption, even when token counts are not provided. This enhances the stability and reliability of the telemetry system.

Highlights

  • Bug Fix: Resolved an issue where usage_metadata.prompt_token_count could be None in streaming mode, leading to an Invalid type NoneType error when attempting to log this value as a span attribute.
  • Robustness: Implemented a null check for prompt_token_count before setting the gen_ai.usage.input_tokens span attribute, ensuring that telemetry reporting gracefully handles cases where this value is not available.
  • Testing: Added a new unit test, test_trace_call_llm_with_no_usage_metadata, to specifically cover the scenario where prompt_token_count is None and verify the correct behavior of the tracing logic.
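A test for this scenario might look roughly like the following. This is a hedged sketch: trace_call_llm here is a simplified stand-in, not the real ADK tracing API, and the actual test_trace_call_llm_with_no_usage_metadata in the PR differs in setup and assertions.

```python
from unittest import mock


def trace_call_llm(span, usage_metadata):
  """Simplified stand-in for the traced function under test."""
  if usage_metadata is not None and usage_metadata.prompt_token_count is not None:
    span.set_attribute(
        'gen_ai.usage.input_tokens', usage_metadata.prompt_token_count
    )


def test_trace_call_llm_with_no_usage_metadata():
  # A streaming chunk may arrive with no usage metadata at all;
  # the tracer should then set no token attributes.
  span = mock.Mock()
  trace_call_llm(span, None)
  span.set_attribute.assert_not_called()


test_trace_call_llm_with_no_usage_metadata()
```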
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command              | Description
Code Review          | /gemini review       | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary      | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist  | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help         | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@wsa-2002 wsa-2002 force-pushed the prompt-token-count-may-be-none-in-streaming-mode branch from 1676512 to d2c4f48 on November 10, 2025 07:14
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively resolves a TypeError that occurred when prompt_token_count was None in streaming mode. The solution, which involves adding a None check before setting the span attribute, is correct and aligns with existing checks for other metadata fields. The addition of a new unit test, test_trace_call_llm_with_no_usage_metadata, is excellent as it specifically validates this fix, ensuring the code is more robust. The changes are well-contained and address the reported issue properly.

@adk-bot adk-bot added the tracing label ([Component] This issue is related to OpenTelemetry tracing) on Nov 10, 2025
@wsa-2002 wsa-2002 force-pushed the prompt-token-count-may-be-none-in-streaming-mode branch from d2c4f48 to 5ea026e on November 10, 2025 07:21
@ryanaiagent ryanaiagent self-assigned this Nov 10, 2025
@ryanaiagent
Collaborator

Hi @wsa-2002, thanks for your contribution.
It looks like the Pyink formatting check failed. Could you please fix the linting error?

@wsa-2002 wsa-2002 force-pushed the prompt-token-count-may-be-none-in-streaming-mode branch from 5ea026e to 9751b33 on November 14, 2025 03:17
@wsa-2002
Author

Hi @ryanaiagent
I've run the autoformatter locally to fix the error, thanks!

@ryanaiagent
Collaborator

Hi @wsa-2002, your PR has been received by the team and is currently under review.
We will provide feedback as soon as we have an update to share.

@ryanaiagent ryanaiagent added the needs review label ([Status] The PR/issue is awaiting review from the maintainer) on Dec 3, 2025
@ryanaiagent
Collaborator

Hi @ankursharmas , can you please review this.
