chore: Set role to user if new_message doesn't have role in Runner.run_async()
#2458
Conversation
**Response from ADK Triaging Agent**

Hello @TanejaAnkisetty, thank you for your contribution! This PR is a bug fix. Could you please associate a GitHub issue with this PR? If there is no existing issue, could you please create one? This information will help reviewers review your PR more efficiently. Thanks!
A content without a role will also cause an invalid event list in the session history.

Instead of accommodating this invalid input, I think a better approach would be to throw in Runner.run_async.

Side note: the GenAI SDK provides a convenient types.UserContent class, which is a subclass of types.Content with role='user' preset.
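For reference, a minimal sketch of the convenience class mentioned above (assuming the google-genai types API; the exact constructor may differ slightly across versions):

```
from google.genai import types

# UserContent is a types.Content subclass with role='user' preset,
# so the two values below are equivalent user messages.
explicit = types.Content(role="user", parts=[types.Part(text="Hello")])
preset = types.UserContent(parts=[types.Part(text="Hello")])

assert preset.role == "user"
```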
Appreciate the quick response.
@Jacksunwei - Updated the code to make the input valid by setting the role to user in Runner.run_async() if the passed-in content doesn't contain a role.
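For context, here is a minimal sketch of the kind of normalization this PR describes. This is illustrative only (the helper name `_normalize_new_message` is hypothetical), not the exact change inside adk-python:

```
from google.genai import types

def _normalize_new_message(new_message: types.Content) -> types.Content:
    # Hypothetical helper: default the role to 'user' when the caller
    # omitted it, so the content is not dropped downstream.
    if not new_message.role:
        return new_message.model_copy(update={"role": "user"})
    return new_message
```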
chore: Set role to user if new_message doesn't have role in `Runner.run_async()`

Merge #2458

**Summary**

Verifies that user-provided messages are always passed to the LLM with the 'user' role, regardless of whether the role is explicitly set in types.Content. Before this fix, if the user's content reached the LlmRequest without the 'user' role, the text was replaced with the standard placeholder "Handle the requests as specified in the System Instruction." and the user's content was ignored entirely, never reaching the LLM.

**Code to replicate the problem**

```
from google.adk.agents import LlmAgent
from google.adk.sessions import InMemorySessionService
from google.adk.runners import Runner
from google.genai.types import Content, Part
from google.adk.models.lite_llm import LiteLlm

import litellm
litellm._turn_on_debug()

import warnings
warnings.filterwarnings(
    "ignore", category=UserWarning, message=".*InMemoryCredentialService.*"
)

import os
from dotenv import load_dotenv

# Load environment variables from the agent directory's .env file
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# Define the agent
root_agent = LlmAgent(
    name="name_of_agent",
    model=LiteLlm(model="azure/gpt-4o-mini"),
    instruction="You are a customer agent to help the users with their concerns.",
)

# --- Setup Runner and Session ---
app_name, user_id, session_id = "state_app", "user1", "session1"
session_service = InMemorySessionService()
runner = Runner(
    agent=root_agent, app_name=app_name, session_service=session_service
)
print(f"Runner created for agent '{runner.agent.name}'.")

# Run inside an async context (e.g. a notebook) because of the top-level awaits.
session = await session_service.create_session(
    app_name=app_name, user_id=user_id, session_id=session_id
)

# --- Run the Agent ---
async def call_agent_async(query: str, runner, user_id, session_id):
    # No role is set on this Content; this is what triggers the bug.
    user_message = Content(parts=[Part(text=query)])
    async for event in runner.run_async(
        user_id=user_id, session_id=session_id, new_message=user_message
    ):
        print("event")
        print(
            f"  [Event]\n  Author: {event.author}\n  Type: {type(event).__name__}",
            f"\n  Final: {event.is_final_response()}\n  Content: {event.content}",
        )
    return event

event = await call_agent_async(
    "What is the capital of India.",
    runner=runner, user_id=user_id, session_id=session_id,
)
```

**Before the fix (current adk-python output)**

```
00:29:24 - LiteLLM:DEBUG: utils.py:348 -
00:29:24 - LiteLLM:DEBUG: utils.py:348 - Request to litellm:
00:29:24 - LiteLLM:DEBUG: utils.py:348 - litellm.acompletion(model='azure/gpt-4o-mini', messages=[{'role': 'developer', 'content': 'You are a customer agent to help the users with their concerns.\n\nYou are an agent. Your internal name is "name_of_agent".'}, {'role': 'user', 'content': 'Handle the requests as specified in the System Instruction.'}], tools=None, response_format=None)
```

**After the fix**

```
00:28:46 - LiteLLM:DEBUG: utils.py:349 -
00:28:46 - LiteLLM:DEBUG: utils.py:349 - Request to litellm:
00:28:46 - LiteLLM:DEBUG: utils.py:349 - litellm.acompletion(model='azure/gpt-4o-mini', messages=[{'role': 'developer', 'content': 'You are a customer agent to help the users with their concerns.\n\nYou are an agent. Your internal name is "name_of_agent".'}, {'role': 'user', 'content': 'What is the capital of India.'}], tools=None, response_format=None)
```

**Testing**

The following unit test exercises the change; it is added at the location suggested in the contribution guidelines:

adk-python\tests\unittests\models\test_base_llm.py

```
import pytest
from google.genai import types

from google.adk.models.lite_llm import _get_completion_inputs
from google.adk.models.llm_request import LlmRequest


@pytest.mark.parametrize(
    "content_kwargs",
    [
        # Case 1: Explicit role provided
        {"role": "user", "parts": [types.Part(text="This is an input text from user.")]},
        # Case 2: Role omitted, should still be treated as 'user'
        {"parts": [types.Part(text="This is an input text from user.")]},
    ],
)
def test_user_content_role_defaults_to_user(content_kwargs):
    """Verifies that user-provided messages are always passed to the LLM with
    the 'user' role, regardless of whether the role is explicitly set in
    types.Content.

    The helper `_get_completion_inputs` should normalize messages so that an
    explicit 'user' role and an implicit (missing) role are equivalent.
    """
    llm_request = LlmRequest(
        contents=[types.Content(**content_kwargs)],
        config=types.GenerateContentConfig(),
    )
    messages, _, _, _ = _get_completion_inputs(llm_request)
    assert all(
        msg.get("role") == "user" for msg in messages
    ), f"Expected role 'user' but got {messages}"
    assert any(
        "This is an input text from user." == (msg.get("content") or "")
        for msg in messages
    ), f"Expected the user text to be preserved, but got {messages}"
```

COPYBARA_INTEGRATE_REVIEW=#2458 from TanejaAnkisetty:bug/agent-user-content 381b014
PiperOrigin-RevId: 809281620
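To run just the new test locally, something along these lines should work (the test path comes from the PR description; adjust it to your checkout):

```
import pytest

# Selects the parametrized test by function name, so both cases run.
pytest.main([
    "-q",
    "tests/unittests/models/test_base_llm.py::test_user_content_role_defaults_to_user",
])
```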
Merged in 78fd480.