Add Agent Skills for App supervisor api + background mode #183
jennsun wants to merge 3 commits into databricks:main from
Conversation
| ### 2. Simulated streaming for the frontend | ||
|
| The chat frontend expects SSE streaming events. Since background mode currently returns the full text at once, `output_item_to_stream_events()` chunks text into 3-word deltas to simulate a streaming experience. Streaming will be supported soon, which will remove this workaround. | ||
todo: update this once Sabhya's changes to support streaming are in
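For illustration, a minimal sketch of the chunking approach described above: splitting a finished text blob into small word deltas so the SSE frontend sees a stream. The function name, default chunk size, and event shape here are assumptions, not the skill's actual `output_item_to_stream_events()` implementation.

```python
# Hypothetical sketch of simulated streaming: split completed output text
# into small word-level deltas. Event keys are placeholder assumptions.
def chunk_text_deltas(text: str, words_per_chunk: int = 3) -> list[dict]:
    """Turn a finished text blob into simulated streaming delta events."""
    words = text.split()
    events = []
    for i in range(0, len(words), words_per_chunk):
        delta = " ".join(words[i:i + words_per_chunk])
        # Re-add the trailing space except on the final chunk,
        # so concatenating all deltas reproduces the original text.
        if i + words_per_chunk < len(words):
            delta += " "
        events.append({"type": "response.output_text.delta", "delta": delta})
    return events
```

In the real skill, these events would be yielded one at a time (the later commit adds a 30ms `asyncio.sleep` between yields) rather than returned as a list.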
| f"[poll] Skipping incomplete item: " | ||
| f"type={item_dict.get('type')}, status={item_status}" | ||
| ) | ||
| continue |
This should be a `break`, right? If there is an incomplete item in the middle, we don't want to stream the complete ones after it, because then they would arrive out of order.
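A minimal sketch of the ordering fix being suggested: stop at the first incomplete item so any completed items after it wait for the next poll. Item shapes here are simplified assumptions, not the actual poll-loop code.

```python
# Sketch of the break-vs-continue concern: `break` at the first
# incomplete item preserves stream order; `continue` would skip it
# and emit later completed items too early.
def items_ready_to_stream(items: list[dict]) -> list[dict]:
    ready = []
    for item in items:
        if item.get("status") != "completed":
            # break, not continue: anything after an incomplete item
            # must wait for the next poll, or it streams out of order.
            break
        ready.append(item)
    return ready
```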
| ```python | ||
| TOOLS = [ | ||
| # Genie space — natural language queries over structured data | ||
| { |
Can we change these? We renamed things to match this tool spec: https://openapi.dev.databricks.com/pr-1737623/api/workspace/supervisoragents/createtool
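For context, a hedged sketch of what a tool entry in the skill's `TOOLS` list might look like. The real field names should come from the linked createtool spec; every key and value below is a placeholder assumption.

```python
# Hypothetical tool entries; real field names must follow the linked
# createtool spec. Keys and IDs here are placeholder assumptions.
TOOLS = [
    {
        # Genie space — natural language queries over structured data
        "type": "genie_space",
        "name": "sales_genie",
        "space_id": "<genie-space-id>",
    },
    {
        # Unity Catalog function exposed as a tool
        "type": "uc_function",
        "name": "main.default.lookup_customer",
    },
]
```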
cc5b14b to 78251a3
Add two new skills for using the Databricks Supervisor API:

- supervisor-api: base skill for hosted tools (Genie, UC functions, KA endpoints, MCP servers)
- supervisor-api-background-mode: long-running tasks with polling and simulated streaming

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
78251a3 to 52d3ca4
- Change chunk size from 3 words to 1 word for smoother streaming
- Add 30ms delay between stream event yields for visible streaming effect
- Add asyncio import to agent.py
- Add quickstart as prerequisite (sets up MLflow experiment + .env)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
dhruv0811 left a comment
Looks good overall!

It would be great if we could add some external references to these skills to improve discoverability. Having skills that customers already use introduce them to new offerings could be a cool way of marketing this.
It seems the `sync-skills.py` file wasn't updated with the new skill?
| return None | ||
|
| def create_supervisor_client( |
How come we don't forward the OBO token (`x-forwarded-access-token`) in background mode? Is this not supported yet? I see we do this in the regular supervisor-api skill with `_get_client()`.
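A hedged sketch of the OBO forwarding pattern this comment refers to: prefer the user's forwarded access token when the request carries one, otherwise fall back to the service credential. `resolve_auth_token` and its arguments are hypothetical helpers, not the skill's real `_get_client()` implementation.

```python
# Hypothetical OBO resolution: read x-forwarded-access-token from the
# incoming request headers and use it for downstream calls when present.
# Names here are placeholders, not the skill's actual code.
def resolve_auth_token(request_headers: dict, fallback_token: str) -> str:
    # Prefer the user's on-behalf-of token; otherwise fall back to the
    # app's own credential (e.g. service principal token).
    obo = request_headers.get("x-forwarded-access-token")
    return obo if obo else fallback_token
```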
Do you think it might be useful to reference these skills in the base AGENTS.md to improve discovery? Perhaps we could also explain there what the Supervisor API is and when the agent should suggest it to the user. This would help enable customer discovery of this new offering as well.
| workspace_client = WorkspaceClient() | ||
| client = create_supervisor_client(workspace_client) |
nit: is there a reason we don't cache these at module level? We do this for the base supervisor-api skill. Constructing a new `WorkspaceClient` and `AsyncDatabricksOpenAI` on every call might be a little inefficient.
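A minimal sketch of the module-level caching being suggested, using `functools.lru_cache` so the client is constructed once and reused. `WorkspaceClient` here is a stub standing in for `databricks.sdk.WorkspaceClient` so the sketch is self-contained.

```python
from functools import lru_cache

class WorkspaceClient:
    # Stub standing in for databricks.sdk.WorkspaceClient; the real
    # class reads credentials from the environment on construction.
    pass

@lru_cache(maxsize=1)
def get_workspace_client() -> WorkspaceClient:
    # Constructed on first call only; subsequent calls return the
    # cached instance instead of rebuilding the client.
    return WorkspaceClient()
```

The same pattern would apply to the `AsyncDatabricksOpenAI` client; note that caching at module level assumes a single credential per process, which is why the OBO question above matters.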
| ResponsesAgentResponse, | ||
| ) | ||
|
| mlflow.openai.autolog() |
Noticed a couple of spots where we hardcode OpenAI like this. Is it worth having a separate LangChain example?
Using the supervisor-api + background-mode skills:
background-mode-template.mp4

With MCP tool call approval + mocked streaming:
mcp-server-mocked-streaming.online-video-cutter.com.mp4