AgentCore is a minimal, composable Go library for building AI agent applications.
```shell
go get github.com/voocel/agentcore
```

A restrained core with open extensibility tends to be more reliable than a complex all-in-one solution. Fewer built-ins, more possibilities.
- Keep `Agent`, `AgentLoop`, `Event`, `Tool`, and `Message` stable first
- Behavioral changes should come with tests first
- `examples/` and internal implementation details are not stable API
```
agentcore/         Agent core (types, loop, agent, events, subagent)
agentcore/llm/     LLM adapters (OpenAI, Anthropic, Gemini via litellm)
agentcore/tools/   Built-in tools: read, write, edit, bash
agentcore/memory/  Context compaction — auto-summarize long conversations
```
Core design:

- Standalone loop (`loop.go`) — free function, all dependencies injected via parameters. Double loop: the inner loop processes tool calls + steering, the outer loop handles follow-up
- Stateful Agent (`agent.go`) — sole consumer of loop events, updates internal state then dispatches to external listeners
- Event stream — a single `<-chan Event` output drives any UI (TUI, Web, Slack, logging)
- Two-stage pipeline — `TransformContext` (prune/inject) → `ConvertToLLM` (filter to LLM messages)
- SubAgent tool (`subagent.go`) — multi-agent via tool invocation, four modes: single, parallel, chain, background
- Context compaction (`memory/`) — automatic summarization when context approaches the window limit
```go
package main

import (
	"fmt"
	"os"

	"github.com/voocel/agentcore"
	"github.com/voocel/agentcore/llm"
	"github.com/voocel/agentcore/policy"
	"github.com/voocel/agentcore/tools"
)

func main() {
	model, err := llm.NewOpenAIModel("gpt-5-mini", os.Getenv("OPENAI_API_KEY"))
	if err != nil {
		panic(err)
	}

	agent := agentcore.NewAgent(
		agentcore.WithModel(model),
		agentcore.WithSystemPrompt("You are a helpful coding assistant."),
		agentcore.WithTools(
			tools.NewRead("."),
			tools.NewWrite("."),
			tools.NewEdit("."),
			tools.NewBash("."),
		),
		agentcore.WithPermission(policy.WorkspaceProfile(".")),
	)

	agent.Subscribe(func(ev agentcore.Event) {
		if ev.Type == agentcore.EventMessageEnd {
			if msg, ok := ev.Message.(agentcore.Message); ok && msg.Role == agentcore.RoleAssistant {
				fmt.Println(msg.Content)
			}
		}
	})

	agent.Prompt("List the files in the current directory.")
	agent.WaitForIdle()
}
```

For a safer default, use `policy.ReadOnlyProfile(root)` or `policy.WorkspaceProfile(root)`.
Sub-agents are invoked as regular tools with isolated contexts:
```go
model, _ := llm.NewOpenAIModel("gpt-5-mini", apiKey)

scout := agentcore.SubAgentConfig{
	Name:         "scout",
	Description:  "Fast codebase reconnaissance",
	Model:        model,
	SystemPrompt: "Quickly explore and report findings. Be concise.",
	Tools:        []agentcore.Tool{tools.NewRead("."), tools.NewBash(".")},
	MaxTurns:     5,
}

worker := agentcore.SubAgentConfig{
	Name:         "worker",
	Description:  "General-purpose executor",
	Model:        model,
	SystemPrompt: "Implement tasks given to you.",
	Tools:        []agentcore.Tool{tools.NewRead("."), tools.NewWrite("."), tools.NewEdit("."), tools.NewBash(".")},
}

agent := agentcore.NewAgent(
	agentcore.WithModel(model),
	agentcore.WithTools(agentcore.NewSubAgentTool(scout, worker)),
)
```

Four execution modes are available via tool call: single, parallel, chain, and background.
```go
// Interrupt mid-run (delivered after the current tool; remaining tools are skipped)
agent.Steer(agentcore.UserMsg("Stop and focus on tests instead."))

// Queue for after the agent finishes
agent.FollowUp(agentcore.UserMsg("Now run the tests."))

// Cancel immediately
agent.Abort()
```

All lifecycle events flow through a single channel — subscribe to drive any UI:
```go
agent.Subscribe(func(ev agentcore.Event) {
	switch ev.Type {
	case agentcore.EventMessageStart:  // assistant starts streaming
	case agentcore.EventMessageUpdate: // streaming token delta
	case agentcore.EventMessageEnd:    // message complete
	case agentcore.EventToolExecStart: // tool execution begins
	case agentcore.EventToolExecEnd:   // tool execution ends
	case agentcore.EventError:         // error occurred
	}
})
```

Long-running tools can emit structured progress updates instead of ad-hoc JSON:
```go
agentcore.ReportToolProgress(ctx, agentcore.ProgressPayload{
	Kind:    agentcore.ProgressSummary,
	Agent:   "worker",
	Tool:    "bash",
	Summary: "worker → bash",
})
```

Subscribers should read `ev.Progress` directly for tool progress updates:
```go
agent.Subscribe(func(ev agentcore.Event) {
	if ev.Type == agentcore.EventToolExecUpdate && ev.Progress != nil {
		fmt.Printf("[%s] %s\n", ev.Progress.Kind, ev.Progress.Summary)
	}
})
```

When a model needs to change at runtime, wrap it with `SwappableModel`. The swap takes effect on the next call. `SubAgentConfig.Model` is resolved at the start of each sub-agent run, so the same wrapper also works for sub-agents.
```go
defaultModel, _ := llm.NewOpenAIModel("gpt-5-mini", apiKey)
sw := agentcore.NewSwappableModel(defaultModel)
agent := agentcore.NewAgent(agentcore.WithModel(sw))

nextModel, _ := llm.NewOpenAIModel("gpt-5", apiKey)
sw.Swap(nextModel) // the next turn uses the new model
```

Swap the LLM call itself with a proxy, mock, or custom implementation:
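The wrapper pattern behind `SwappableModel` can be approximated with a mutex-guarded holder. This is a minimal sketch under assumptions: the `Model` interface and the `Swappable` type below are hypothetical stand-ins, not the library's implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// Model stands in for the library's model interface (hypothetical).
type Model interface {
	Name() string
}

// named is a trivial Model used for demonstration.
type named string

func (n named) Name() string { return string(n) }

// Swappable guards the current model with a mutex; a Swap takes effect
// on whichever call next loads the model.
type Swappable struct {
	mu  sync.Mutex
	cur Model
}

func NewSwappable(m Model) *Swappable { return &Swappable{cur: m} }

func (s *Swappable) Swap(m Model) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.cur = m
}

func (s *Swappable) Load() Model {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.cur
}

func main() {
	s := NewSwappable(named("gpt-5-mini"))
	fmt.Println(s.Load().Name()) // gpt-5-mini
	s.Swap(named("gpt-5"))
	fmt.Println(s.Load().Name()) // gpt-5
}
```

Because the agent holds the wrapper rather than the model, in-flight calls keep their model and only subsequent loads observe the swap.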
```go
agent := agentcore.NewAgent(
	agentcore.WithStreamFn(func(ctx context.Context, req *agentcore.LLMRequest) (*agentcore.LLMResponse, error) {
		// Route to your own proxy/gateway
		return callMyProxy(ctx, req)
	}),
)
```

Auto-summarize conversation history when approaching the context window limit. This hooks in via `TransformContext` — zero changes to core:
```go
import "github.com/voocel/agentcore/memory"

agent := agentcore.NewAgent(
	agentcore.WithModel(model),
	agentcore.WithTransformContext(memory.NewCompaction(memory.CompactionConfig{
		Model:         model,
		ContextWindow: 128000,
	})),
	agentcore.WithConvertToLLM(memory.CompactionConvertToLLM),
)
```

On each LLM call, compaction checks total tokens. When they exceed `ContextWindow - ReserveTokens` (default 16384), it:
- Keeps recent messages (default 20000 tokens)
- Summarizes older messages via LLM into a structured checkpoint (Goal / Progress / Key Decisions / Next Steps)
- Tracks file operations (read/write/edit paths) across compacted messages
- Supports incremental updates — subsequent compactions update the existing summary rather than re-summarizing
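The trigger condition described above reduces to a simple threshold check. This sketch is illustrative, not the library's code; the 16384-token default reserve comes from the text:

```go
package main

import "fmt"

// shouldCompact mirrors the documented trigger: compact when total tokens
// exceed contextWindow - reserveTokens (reserve defaults to 16384).
func shouldCompact(totalTokens, contextWindow, reserveTokens int) bool {
	if reserveTokens == 0 {
		reserveTokens = 16384
	}
	return totalTokens > contextWindow-reserveTokens
}

func main() {
	// With a 128000-token window, the threshold is 128000 - 16384 = 111616.
	fmt.Println(shouldCompact(100000, 128000, 0)) // false: still under the threshold
	fmt.Println(shouldCompact(120000, 128000, 0)) // true: reserve would be breached
}
```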
```go
agent := agentcore.NewAgent(
	// Stage 1: prune old messages, inject external context
	agentcore.WithTransformContext(func(ctx context.Context, msgs []agentcore.AgentMessage) ([]agentcore.AgentMessage, error) {
		if len(msgs) > 100 {
			msgs = msgs[len(msgs)-50:]
		}
		return msgs, nil
	}),
	// Stage 2: filter to LLM-compatible messages
	agentcore.WithConvertToLLM(func(msgs []agentcore.AgentMessage) []agentcore.Message {
		var out []agentcore.Message
		for _, m := range msgs {
			if msg, ok := m.(agentcore.Message); ok {
				out = append(out, msg)
			}
		}
		return out
	}),
)
```

| Tool | Description |
|---|---|
| `read` | Read file contents with head truncation (2000 lines / 50KB) |
| `write` | Write file with auto-mkdir |
| `edit` | Exact text replacement with fuzzy match, BOM/line-ending normalization, unified diff output |
| `bash` | Execute shell commands with tail truncation (2000 lines / 50KB) |
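The head vs. tail truncation distinction in the table (keep the start of a file read, keep the end of command output) can be sketched as follows. This is an illustrative helper, not the library's implementation, and it omits the 50KB byte cap for brevity:

```go
package main

import (
	"fmt"
	"strings"
)

// truncateLines keeps at most max lines: from the head for file reads,
// from the tail for shell output (where the last lines matter most).
func truncateLines(s string, max int, fromHead bool) string {
	lines := strings.Split(s, "\n")
	if len(lines) <= max {
		return s
	}
	if fromHead {
		return strings.Join(lines[:max], "\n")
	}
	return strings.Join(lines[len(lines)-max:], "\n")
}

func main() {
	out := "a\nb\nc\nd"
	fmt.Println(truncateLines(out, 2, true))  // head: keeps "a" and "b"
	fmt.Println(truncateLines(out, 2, false)) // tail: keeps "c" and "d"
}
```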
Use `Inject(msg)` when the caller's intent is "deliver this as soon as the current agent state allows" without manually branching on running vs. idle state.

```go
result, err := agent.Inject(agentcore.UserMsg("Re-check unfinished tasks before stopping."))
if err != nil {
	panic(err)
}
fmt.Println(result.Disposition)
```

`Inject` has three outcomes:
- `steered_current_run`: the agent is running, so the message was queued into the current run's steering path
- `resumed_idle_run`: the agent was idle with an assistant-tail conversation, so the message was queued and `Continue()` was started immediately
- `queued`: the message was queued, but no run was started
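The three dispositions follow from a state check that can be sketched like this. The function is illustrative only (the real decision lives inside the agent); the disposition strings come from the list above:

```go
package main

import "fmt"

// disposition sketches Inject's documented decision: steer a running agent,
// resume an idle agent whose conversation ends with an assistant message,
// otherwise just queue the message.
func disposition(running, assistantTail bool) string {
	switch {
	case running:
		return "steered_current_run"
	case assistantTail:
		return "resumed_idle_run"
	default:
		return "queued"
	}
}

func main() {
	fmt.Println(disposition(true, false))  // steered_current_run
	fmt.Println(disposition(false, true))  // resumed_idle_run
	fmt.Println(disposition(false, false)) // queued
}
```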
Use the lower-level APIs when you need stricter control:
- `Steer(msg)`: queue for the steering path without any idle auto-resume logic
- `FollowUp(msg)`: queue for after the current run stops
- Prompt-side injection: keep this in the application layer if the message must be merged into the next explicit user prompt rather than the agent's queues
| Method | Description |
|---|---|
| `NewAgent(opts...)` | Create agent with options |
| `Prompt(input)` | Start new conversation turn |
| `PromptMessages(msgs...)` | Start turn with arbitrary AgentMessages |
| `Continue()` | Resume from current context |
| `Inject(msg)` | Deliver message via steer / idle resume / queue, depending on current state |
| `Steer(msg)` | Inject steering message mid-run |
| `FollowUp(msg)` | Queue message for after completion |
| `Abort()` | Cancel current execution |
| `AbortSilent()` | Cancel without emitting abort marker |
| `WaitForIdle()` | Block until agent finishes |
| `Subscribe(fn)` | Register event listener |
| `State()` | Snapshot of current state |
| `ExportMessages()` | Export messages for serialization |
| `ImportMessages(msgs)` | Import deserialized messages |
Apache License 2.0