Allow thinking_config in generate_content_config for LlmAgent #4108

@invictus2010

Description

Is your feature request related to a problem? Please describe.

Yes. Currently, the library strictly enforces that any thinking_config (such as the
thinking budget) must be configured via the LlmAgent.planner field
(specifically using BuiltInPlanner).

If a user attempts to set thinking_config directly within the
generate_content_config of an LlmAgent,
LlmAgent.validate_generate_content_config raises a ValueError.

This creates friction for two reasons:

  1. Boilerplate: Users who simply want to enable thinking or adjust the
    budget (which is effectively a model hyperparameter) are forced to
    instantiate a full BuiltInPlanner object, adding unnecessary import
    overhead and complexity.
  2. Architectural clarity: It conflates "model parameters" (such as
    temperature, max_output_tokens, and now thinking_budget) with "agent
    strategy" (the Planner). Users intuitively expect model-level settings to
    reside in generate_content_config.
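To make the friction concrete, here is a minimal sketch contrasting the two configuration styles. The class names mirror the real API (google.genai.types.ThinkingConfig, BuiltInPlanner), but the classes below are simplified stand-ins, not the actual library types:

```python
from dataclasses import dataclass, field
from typing import Optional

# Simplified stand-ins for google.genai.types and google.adk.planners.
@dataclass
class ThinkingConfig:
    thinking_budget: int = 0

@dataclass
class GenerateContentConfig:
    temperature: float = 1.0
    thinking_config: Optional[ThinkingConfig] = None

@dataclass
class BuiltInPlanner:
    thinking_config: ThinkingConfig = field(default_factory=ThinkingConfig)

# Today: the thinking budget must travel via a dedicated planner object.
planner_style = BuiltInPlanner(
    thinking_config=ThinkingConfig(thinking_budget=1024)
)

# Proposed: treat it like any other model hyperparameter.
config_style = GenerateContentConfig(
    temperature=0.2,
    thinking_config=ThinkingConfig(thinking_budget=1024),
)
```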

Describe the solution you'd like

I propose relaxing the validation logic in LlmAgent to allow thinking_config
to be set directly in generate_content_config.

To prevent ambiguity or silent failures, we should implement an "Allow but Warn"
strategy:

  1. Update Validation: Modify LlmAgent.validate_generate_content_config to
    remove the ValueError for thinking_config.
  2. Add Precedence Warning: If both self.planner (with thinking enabled)
    AND generate_content_config.thinking_config are present, issue a
    UserWarning. This informs the user that the Planner's configuration will
    take precedence (due to the order of request processors).
  3. Runtime Logging: Ideally, update BuiltInPlanner.apply_thinking_config
    to log an INFO or WARNING message if it detects it is overwriting an
    existing thinking configuration on the LlmRequest.
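Steps 1 and 2 could be sketched as follows. This is a standalone mock of the idea, not the real validator (the actual LlmAgent.validate_generate_content_config is a Pydantic field validator):

```python
import warnings


def validate_generate_content_config(planner, generate_content_config):
    """Sketch of the proposed "Allow but Warn" validation.

    Previously the validator raised ValueError whenever thinking_config was
    set on generate_content_config; under this proposal it only warns when
    there is a genuine conflict with the planner.
    """
    cfg_thinking = getattr(generate_content_config, "thinking_config", None)
    planner_thinking = getattr(planner, "thinking_config", None)
    if cfg_thinking is not None and planner_thinking is not None:
        warnings.warn(
            "Both planner.thinking_config and "
            "generate_content_config.thinking_config are set; the planner's "
            "configuration will take precedence because its request "
            "processor runs later.",
            UserWarning,
        )
    return generate_content_config
```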

Describe alternatives you've considered

  * Status Quo: Continue enforcing Planner usage. This maintains strict
    separation but keeps the developer experience more cumbersome than
    necessary for simple use of thinking models.
  * Silent Overwrite: Remove the validation but add no warnings. This is
    risky because the _NlPlanningRequestProcessor runs after the basic
    processor. A user might set a budget of 2000 in config, have a default
    planner with budget 1000, and be confused about why their setting isn't
    taking effect. The warning is essential.
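The confusion scenario can be reproduced with a toy pipeline that mimics the processor ordering described above (basic processor first, planning processor second); the function names are illustrative stand-ins for the real processors:

```python
from copy import deepcopy


def basic_processor(llm_request, generate_content_config):
    # Mirrors _BasicLlmRequestProcessor: copies the user's config
    # onto the outgoing request.
    llm_request["config"] = deepcopy(generate_content_config)


def planning_processor(llm_request, planner_budget):
    # Mirrors the planner applying its thinking config: it runs
    # later in the chain, so its value wins.
    llm_request["config"]["thinking_config"] = {
        "thinking_budget": planner_budget
    }


request = {}
basic_processor(request, {"thinking_config": {"thinking_budget": 2000}})
planning_processor(request, 1000)

# The user asked for 2000, but the planner's 1000 silently took effect.
print(request["config"]["thinking_config"]["thinking_budget"])  # → 1000
```

Without a warning, nothing tells the user that their 2000 was overwritten, which is exactly why the "Allow but Warn" variant is preferred.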

Additional context

Internal Logic: The underlying flow logic in
src/google/adk/flows/llm_flows/basic.py (_BasicLlmRequestProcessor) already
deep-copies the entire generate_content_config to the LlmRequest. Therefore,
once the validation in llm_agent.py is removed, the parameter will correctly
propagate to the model without further changes to the core flow.
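The deep-copy behavior is why no further flow changes are needed: any field present in generate_content_config already reaches the request, and later mutations of the request do not leak back into the agent's config. A toy demonstration (plain dicts standing in for the real config objects):

```python
from copy import deepcopy

agent_config = {
    "temperature": 0.2,
    "thinking_config": {"thinking_budget": 512},
}

# _BasicLlmRequestProcessor deep-copies the whole config onto the
# LlmRequest, so thinking_config propagates with no special handling.
llm_request_config = deepcopy(agent_config)

# Mutating the request-side copy leaves the agent's config untouched.
llm_request_config["thinking_config"]["thinking_budget"] = 9999
print(agent_config["thinking_config"]["thinking_budget"])  # → 512
```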

Cross-Language Consistency: A review of the Go implementation
(google/adk-go) shows that it does not enforce this restriction. In
adk-go, ThinkingConfig is allowed within the GenerateContentConfig struct
and is passed through to the model without requiring a separate Planner
abstraction. Bringing the Python implementation in line with Go would improve
ecosystem consistency.

Metadata

Labels

core: [Component] This issue is related to the core interface and implementation
