Send images to Copilot sessions by attaching them as file attachments. The runtime reads the file from disk, converts it to base64 internally, and sends it to the LLM as an image content block — no manual encoding required.
```mermaid
sequenceDiagram
    participant App as Your App
    participant SDK as SDK Session
    participant RT as Copilot Runtime
    participant LLM as Vision Model
    App->>SDK: send({ prompt, attachments: [{ type: "file", path }] })
    SDK->>RT: JSON-RPC with file attachment
    RT->>RT: Read file from disk
    RT->>RT: Detect image, convert to base64
    RT->>RT: Resize if needed (model-specific limits)
    RT->>LLM: image_url content block (base64)
    LLM-->>RT: Response referencing the image
    RT-->>SDK: assistant.message events
    SDK-->>App: event stream
```
| Concept | Description |
|---|---|
| File attachment | An attachment with type: "file" and an absolute path to an image on disk |
| Automatic encoding | The runtime reads the image, converts it to base64, and sends it as an image_url block |
| Auto-resize | The runtime automatically resizes or quality-reduces images that exceed model-specific limits |
| Vision capability | The model must have capabilities.supports.vision = true to process images |
Attach an image file to any message using the file attachment type. The path must be an absolute path to an image on disk.
Node.js / TypeScript

```typescript
import { CopilotClient } from "@github/copilot-sdk";

const client = new CopilotClient();
await client.start();

const session = await client.createSession({
  model: "gpt-4.1",
  onPermissionRequest: async () => ({ kind: "approved" }),
});

await session.send({
  prompt: "Describe what you see in this image",
  attachments: [
    {
      type: "file",
      path: "/absolute/path/to/screenshot.png",
    },
  ],
});
```

Python
```python
from copilot import CopilotClient
from copilot.types import PermissionRequestResult

client = CopilotClient()
await client.start()

session = await client.create_session({
    "model": "gpt-4.1",
    "on_permission_request": lambda req, inv: PermissionRequestResult(kind="approved"),
})

await session.send({
    "prompt": "Describe what you see in this image",
    "attachments": [
        {
            "type": "file",
            "path": "/absolute/path/to/screenshot.png",
        },
    ],
})
```

Go
```go
package main

import (
	"context"

	copilot "github.com/github/copilot-sdk/go"
)

func main() {
	ctx := context.Background()
	client := copilot.NewClient(nil)
	client.Start(ctx)

	session, _ := client.CreateSession(ctx, &copilot.SessionConfig{
		Model: "gpt-4.1",
		OnPermissionRequest: func(req copilot.PermissionRequest, inv copilot.PermissionInvocation) (copilot.PermissionRequestResult, error) {
			return copilot.PermissionRequestResult{Kind: copilot.PermissionRequestResultKindApproved}, nil
		},
	})

	path := "/absolute/path/to/screenshot.png"
	session.Send(ctx, copilot.MessageOptions{
		Prompt: "Describe what you see in this image",
		Attachments: []copilot.Attachment{
			{
				Type: copilot.File,
				Path: &path,
			},
		},
	})
}
```

.NET
```csharp
using GitHub.Copilot.SDK;

public static class ImageInputExample
{
    public static async Task Main()
    {
        await using var client = new CopilotClient();
        await using var session = await client.CreateSessionAsync(new SessionConfig
        {
            Model = "gpt-4.1",
            OnPermissionRequest = (req, inv) =>
                Task.FromResult(new PermissionRequestResult { Kind = PermissionRequestResultKind.Approved }),
        });

        await session.SendAsync(new MessageOptions
        {
            Prompt = "Describe what you see in this image",
            Attachments = new List<UserMessageDataAttachmentsItem>
            {
                new UserMessageDataAttachmentsItemFile
                {
                    Path = "/absolute/path/to/screenshot.png",
                    DisplayName = "screenshot.png",
                },
            },
        });
    }
}
```

Supported image formats include JPG, PNG, GIF, and other common image types. The runtime reads the image from disk and converts it as needed before sending to the LLM. Use PNG or JPEG for best results, as these are the most widely supported formats.
The model's capabilities.limits.vision.supported_media_types field lists the exact MIME types it accepts.
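Before attaching a file, a small pre-flight check can map its extension to a MIME type and verify membership in that list. The sketch below is illustrative: the extension-to-MIME mapping is an assumption, and how you obtain the supported_media_types array from the SDK's capabilities object is left out.

```typescript
// Sketch: check an image path against a model's accepted MIME types.
// The `supportedMediaTypes` array would come from
// capabilities.limits.vision.supported_media_types.
const EXTENSION_TO_MIME: Record<string, string> = {
  ".png": "image/png",
  ".jpg": "image/jpeg",
  ".jpeg": "image/jpeg",
  ".gif": "image/gif",
  ".webp": "image/webp",
};

function mimeTypeForPath(path: string): string | undefined {
  const dot = path.lastIndexOf(".");
  if (dot === -1) return undefined;
  return EXTENSION_TO_MIME[path.slice(dot).toLowerCase()];
}

function isAccepted(path: string, supportedMediaTypes: string[]): boolean {
  const mime = mimeTypeForPath(path);
  return mime !== undefined && supportedMediaTypes.includes(mime);
}
```

For example, `isAccepted("/tmp/chart.png", ["image/png", "image/jpeg"])` returns `true`, while an `.svg` path is rejected because it never maps to an accepted type.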
The runtime automatically processes images to fit within the model's constraints. No manual resizing is required.
- Images that exceed the model's dimension or size limits are automatically resized (preserving aspect ratio) or quality-reduced.
- If an image cannot be brought within limits after processing, it is skipped and not sent to the LLM.
- The model's capabilities.limits.vision.max_prompt_image_size field indicates the maximum image size in bytes.
You can check these limits at runtime via the model capabilities object. For the best experience, use reasonably-sized PNG or JPEG images.
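To illustrate the kind of scaling involved (this is not the runtime's actual algorithm, which is internal), fitting an image into dimension limits while preserving aspect ratio reduces to a single scale factor; the limit values below are hypothetical:

```typescript
// Illustrative only: compute the dimensions an image would be scaled to
// so it fits within maxWidth x maxHeight, preserving aspect ratio.
// Images already within limits are left untouched (scale capped at 1).
function fitWithin(
  width: number,
  height: number,
  maxWidth: number,
  maxHeight: number
): { width: number; height: number } {
  const scale = Math.min(1, maxWidth / width, maxHeight / height);
  return {
    width: Math.floor(width * scale),
    height: Math.floor(height * scale),
  };
}
```

For example, `fitWithin(4000, 3000, 2048, 2048)` yields 2048 x 1536: the width is the binding constraint, and the height shrinks by the same factor.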
Not all models support vision. Check the model's capabilities before sending images.
| Field | Type | Description |
|---|---|---|
| capabilities.supports.vision | boolean | Whether the model can process image inputs |
| capabilities.limits.vision.supported_media_types | string[] | MIME types the model accepts (e.g., ["image/png", "image/jpeg"]) |
| capabilities.limits.vision.max_prompt_images | number | Maximum number of images per prompt |
| capabilities.limits.vision.max_prompt_image_size | number | Maximum image size in bytes |
```typescript
interface VisionCapabilities {
  vision?: {
    supported_media_types: string[];
    max_prompt_images: number;
    max_prompt_image_size: number; // bytes
  };
}
```

When tools return images (e.g., screenshots or generated charts), the result contains "image" content blocks with base64-encoded data.
| Field | Type | Description |
|---|---|---|
| type | "image" | Content block type discriminator |
| data | string | Base64-encoded image data |
| mimeType | string | MIME type (e.g., "image/png") |
These image blocks appear in tool.execution_complete event results. See the Streaming Events guide for the full event lifecycle.
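If you want to persist such a block yourself, decoding is a one-liner with Node's Buffer. This sketch assumes a block shaped like the fields in the table above; the event-handling wiring that delivers the block is omitted.

```typescript
import { writeFileSync } from "node:fs";

// Shape of an image content block, mirroring the documented fields.
interface ImageContentBlock {
  type: "image";
  data: string;     // base64-encoded image bytes
  mimeType: string; // e.g. "image/png"
}

// Decode the base64 payload and write it to disk, deriving a file
// extension from the MIME subtype (e.g. "image/png" -> ".png").
function saveImageBlock(block: ImageContentBlock, basePath: string): string {
  const ext = block.mimeType.split("/")[1] ?? "bin";
  const outPath = `${basePath}.${ext}`;
  writeFileSync(outPath, Buffer.from(block.data, "base64"));
  return outPath;
}
```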
| Tip | Details |
|---|---|
| Use PNG or JPEG directly | Avoids conversion overhead — these are sent to the LLM as-is |
| Keep images reasonably sized | Large images may be quality-reduced, which can lose important details |
| Use absolute paths | The runtime reads files from disk; relative paths may not resolve correctly |
| Check vision support first | A model without vision support cannot interpret the image, so the attachment spends tokens with no visual understanding |
| Multiple images are supported | Attach several file attachments in one message, up to the model's max_prompt_images limit |
| Images are not base64 in your code | You provide a file path — the runtime handles encoding, resizing, and format conversion |
| SVG is not supported | SVG files are text-based and excluded from image processing |
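Several of these checks can be combined into one pre-flight validation before calling send. This is a hedged sketch, not SDK API: the vision flag and image-count limit follow the capability fields documented in this guide, and the absolute-path check assumes POSIX-style paths.

```typescript
// Sketch: validate a set of image paths against a model's reported
// vision capabilities before attaching them. Returns a list of
// problems; an empty list means the attachments look safe to send.
function validateImageAttachments(
  paths: string[],
  visionSupported: boolean, // capabilities.supports.vision
  maxPromptImages: number   // capabilities.limits.vision.max_prompt_images
): string[] {
  const errors: string[] = [];
  if (!visionSupported) errors.push("model does not support vision");
  if (paths.length > maxPromptImages)
    errors.push(`too many images: ${paths.length} > ${maxPromptImages}`);
  for (const p of paths) {
    // POSIX-style absolute-path check (assumption; adjust for Windows).
    if (!p.startsWith("/")) errors.push(`not an absolute path: ${p}`);
    if (p.toLowerCase().endsWith(".svg")) errors.push(`SVG not supported: ${p}`);
  }
  return errors;
}
```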
- Streaming Events — event lifecycle including tool result content blocks
- Steering & Queueing — sending follow-up messages with attachments