prototype: support plain a2a via adding images #1890
base: main
Conversation
Signed-off-by: Tomas Weiss <tomas.weiss2@gmail.com>
Summary of Changes

Hello @tomkis, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the system's ability to onboard and manage agents by introducing a dynamic method for retrieving agent metadata. It moves beyond relying solely on Docker image labels, allowing the system to deploy a temporary, isolated environment to query an agent's capabilities directly from its running container. This change improves flexibility and compatibility for integrating diverse agent images into the platform.
Code Review
This pull request introduces support for resolving agent cards directly from container images, which is a significant enhancement for plain A2A (Agent-to-Agent) functionality. The changes involve adding a new custom exception for missing agent card labels, implementing Kubernetes probe deployments to fetch agent cards from running containers, and integrating this logic into the provider creation and patching workflows. The code is generally well-structured and includes necessary error handling and retry mechanisms. I've identified a few areas for improvement related to error logging during cleanup, timeout configurability, and naming consistency for interface methods.
with suppress(Exception):
    await self._deployment_manager.delete_probe_deployment(probe_id=probe_id)
Similar to the comment in provider_deployment_manager.py, suppressing all exceptions during the cleanup of probe deployments can hide important issues. It's recommended to log any exceptions that occur during delete_probe_deployment at a warning level. This ensures that cleanup failures are recorded for debugging purposes without preventing the primary error from being raised.
finally:
    with suppress(Exception):
        await self._deployment_manager.delete_probe_deployment(probe_id=probe_id)

super().__init__(message, status_code)

class MissingAgentCardLabelError(ValueError):
Probably you want to inherit from PlatformError and use a 4xx status code here.
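A hedged sketch of that suggestion — PlatformError and its (message, status_code) constructor are assumed from the quoted super().__init__(message, status_code) call above, and the message text and the 422 code are placeholders only:

class MissingAgentCardLabelError(PlatformError):
    """Raised when a Docker image does not carry the agent card label."""

    def __init__(self, message: str = "Agent card label is missing from the image") -> None:
        # 422 is used here only as an example of a 4xx status code.
        super().__init__(message, 422)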
| "propagate": True, | ||
| }, | ||
| "httpx": {"level": logging.WARNING}, | ||
| "procrastinate": {"level": logging.WARNING}, |
This should be configurable, as discussed offline.
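If the comment refers to the hard-coded WARNING levels quoted above (an assumption), one minimal way to make them configurable is to read the level from an environment variable; the variable name below is illustrative only and the project's actual settings mechanism may differ:

import logging
import os

# Fall back to WARNING when the variable is unset or holds an unknown level name.
_noisy_lib_level = getattr(
    logging, os.environ.get("NOISY_LIB_LOG_LEVEL", "WARNING").upper(), logging.WARNING
)

LOGGERS = {
    "httpx": {"level": _noisy_lib_level},
    "procrastinate": {"level": _noisy_lib_level},
}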
except MissingAgentCardLabelError:
    if isinstance(location, DockerImageProviderLocation):
        logger.info("Missing Agent Card Label, fetching from container")
        agent_card = await self._fetch_agent_card_from_container(location)
This can potentially take minutes (it includes downloading an image of roughly 100 MB); consider a separate asynchronous API with polling for unsupported images.
This is how it could work in the CLI:
- POST /providers -> Missing card label error
- POST /providers/discovery
- GET /providers/discovery/{id} # Poll status

You'd need to add a new table for discovery jobs and a Procrastinate worker to handle this asynchronously; a rough sketch of the endpoints follows.
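A non-authoritative sketch of the proposed flow, assuming a FastAPI app (an assumption about the stack) and using an in-memory dict in place of the new discovery-jobs table; in a real implementation the background step would be a Procrastinate task that deploys the probe and persists the fetched agent card:

import uuid

from fastapi import BackgroundTasks, FastAPI, HTTPException

app = FastAPI()

# Stand-in for the proposed discovery-jobs table.
_discovery_jobs: dict[str, dict] = {}


async def _run_discovery(job_id: str, image: str) -> None:
    # Placeholder for the Procrastinate task: deploy the probe deployment,
    # fetch the agent card from the running container, then persist the result.
    _discovery_jobs[job_id]["status"] = "completed"


@app.post("/providers/discovery", status_code=202)
async def start_discovery(payload: dict, background_tasks: BackgroundTasks) -> dict:
    job_id = str(uuid.uuid4())
    _discovery_jobs[job_id] = {"status": "pending", "image": payload["image"]}
    background_tasks.add_task(_run_discovery, job_id, payload["image"])
    return {"id": job_id, "status": "pending"}


@app.get("/providers/discovery/{job_id}")
async def get_discovery(job_id: str) -> dict:
    # Clients poll this endpoint until the status is "completed" or "failed".
    job = _discovery_jobs.get(job_id)
    if job is None:
        raise HTTPException(status_code=404, detail="Discovery job not found")
    return {"id": job_id, **job}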
Force-pushed from 4766d0e to 093fd5d.
Summary
Linked Issues
Documentation
If this PR adds a new feature or changes an existing one, make sure the documentation is adjusted accordingly. If no documentation changes are needed, please explain why.