Conversation

@jacobsimionato (Collaborator)

  • Make the LLM choose from a limited set of template screens, without needing to understand A2UI at all.
  • For the restaurant search use case, bundle the tool call and template instantiation into one operation, so the flow becomes: 1. LLM inference -> 2. tool call -> 3. substitute the results into the template screen. This way only one LLM inference is required, and the LLM does not need to regurgitate the restaurant data returned by the tool call (see the sketch after this list).
  • Use Gemini Flash Lite, which is fine for these simple cases. With the optimizations above, Flash is still pretty fast.
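
A minimal sketch of the bundled flow, assuming a single structured-output inference that picks a template and a tool call up front. The function names, template IDs, and data shapes here (`choose_template_and_tool`, `search_restaurants`, `restaurant_list`) are illustrative stand-ins, not part of A2UI or any real SDK; the model call itself is stubbed.

```python
# Hypothetical sketch: one LLM inference selects a template screen and a tool
# call; the tool result is substituted into the template with no second model
# round trip. All names below are illustrative, not real A2UI/SDK APIs.

from dataclasses import dataclass


@dataclass
class ToolPlan:
    template_id: str   # which pre-defined template screen to render
    tool_name: str     # which tool to invoke
    tool_args: dict    # arguments extracted from the user query


# Step 1: single LLM inference. In practice this would be a constrained /
# structured-output call to Gemini Flash Lite; stubbed here with a fixed plan.
def choose_template_and_tool(user_query: str) -> ToolPlan:
    return ToolPlan(
        template_id="restaurant_list",
        tool_name="search_restaurants",
        tool_args={"query": user_query, "max_results": 3},
    )


# Step 2: the tool call itself (stubbed with canned data).
def search_restaurants(query: str, max_results: int) -> list[dict]:
    return [
        {"name": "Trattoria Roma", "rating": 4.6},
        {"name": "Sushi Kaito", "rating": 4.8},
    ][:max_results]


TOOLS = {"search_restaurants": search_restaurants}

# The limited set of template screens the LLM is allowed to choose from.
TEMPLATES = {
    "restaurant_list": lambda results: {
        "screen": "restaurant_list",
        "items": [{"title": r["name"], "subtitle": str(r["rating"])} for r in results],
    },
}


def handle_query(user_query: str) -> dict:
    plan = choose_template_and_tool(user_query)        # 1. LLM inference
    results = TOOLS[plan.tool_name](**plan.tool_args)  # 2. tool call
    return TEMPLATES[plan.template_id](results)        # 3. substitute into template


if __name__ == "__main__":
    print(handle_query("cheap sushi near me"))
```

Because the template substitution is plain data plumbing, the model never has to re-emit the tool output as tokens, which is where most of the latency saving comes from.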
