feat: add purge errors strategy to prune tool inputs after failed calls
Introduces a new automatic strategy that removes potentially large tool inputs
from errored tool calls after a configurable number of turns (default: 4).
Error messages are preserved for context while reducing token usage.
- Add PurgeErrors config interface with enabled, turns, and protectedTools
- Implement purgeErrors strategy with turn-based pruning logic
- Integrate into message transform pipeline
- Update documentation with new strategy details
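The config surface named in the bullets above might look like the following sketch (field names come from the commit summary; the interface name and default values are assumptions, not DCP's actual code):

```typescript
// Sketch of the PurgeErrors config interface named in the commit summary.
// Field names are from the bullet list above; defaults are assumptions.
interface PurgeErrorsConfig {
  enabled: boolean;         // toggle the strategy on or off
  turns: number;            // turns to wait before pruning errored inputs
  protectedTools: string[]; // tool names that are never pruned
}

// Assumed defaults, matching the "default: 4" mentioned above.
const defaultPurgeErrors: PurgeErrorsConfig = {
  enabled: true,
  turns: 4,
  protectedTools: [],
};
```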
README.md (23 additions, 11 deletions)
```diff
@@ -23,26 +23,34 @@ Restart OpenCode. The plugin will automatically start optimizing your sessions.
 
 ## How Pruning Works
 
-DCP uses multiple strategies to reduce context size:
+DCP uses multiple tools and strategies to reduce context size:
+
+### Tools
+
+**Discard** — Exposes a `discard` tool that the AI can call to remove completed or noisy tool content from context.
+
+**Extract** — Exposes an `extract` tool that the AI can call to distill valuable context into concise summaries before removing the tool content.
+
+### Strategies
 
```
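As a rough illustration of the tools above (not DCP's actual API or message types), a `discard`-style call amounts to swapping a tool result's content for a placeholder:

```typescript
// Hypothetical shapes for illustration only.
type ToolResult = { id: string; content: string; pruned?: boolean };

// A discard-style operation: replace a completed tool result's content with
// a placeholder so later requests stop paying tokens for it.
function discard(results: ToolResult[], id: string): ToolResult[] {
  return results.map((r) =>
    r.id === id ? { ...r, content: "[pruned by DCP]", pruned: true } : r
  );
}
```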
```diff
 **Deduplication** — Identifies repeated tool calls (e.g., reading the same file multiple times) and keeps only the most recent output. Runs automatically on every request with zero LLM cost.
 
 **Supersede Writes** — Prunes write tool inputs for files that have subsequently been read. When a file is written and later read, the original write content becomes redundant since the current file state is captured in the read result. Runs automatically on every request with zero LLM cost.
 
-**Discard Tool** — Exposes a `discard` tool that the AI can call to remove completed or noisy tool outputs from context. Use this for task completion cleanup and removing irrelevant outputs.
-
-**Extract Tool** — Exposes an `extract` tool that the AI can call to distill valuable context into concise summaries before removing the raw outputs. Use this when you need to preserve key findings while reducing context size.
+**Purge Errors** — Prunes tool inputs for tools that returned errors after a configurable number of turns (default: 4). Error messages are preserved for context, but the potentially large input content is removed. Runs automatically on every request with zero LLM cost.
 
-**On Idle Analysis** — Uses a language model to semantically analyze conversation context during idle periods and identify tool outputs that are no longer relevant.
+**On Idle Analysis** — Uses a language model to semantically analyze conversation context during idle periods and identify tool outputs that are no longer relevant. Disabled by default (legacy behavior).
 
-Your session history is never modified. DCP replaces pruned outputs with a placeholder before sending requests to your LLM.
+Your session history is never modified—DCP replaces pruned content with placeholders before sending requests to your LLM.
 
```
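The deduplication strategy described above can be sketched as follows (the data shapes and the `[pruned]` placeholder are assumptions, not DCP's real types):

```typescript
// Hypothetical shape for illustration only.
type ToolCall = { tool: string; input: string; output: string };

// Minimal deduplication sketch, assuming calls are ordered oldest-to-newest:
// for repeated (tool, input) pairs, only the newest call keeps its output.
function dedupe(calls: ToolCall[]): ToolCall[] {
  const lastIndex = new Map<string, number>();
  calls.forEach((c, i) => lastIndex.set(`${c.tool}:${c.input}`, i));
  return calls.map((c, i) =>
    lastIndex.get(`${c.tool}:${c.input}`) === i ? c : { ...c, output: "[pruned]" }
  );
}
```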
```diff
 ## Impact on Prompt Caching
 
 LLM providers like Anthropic and OpenAI cache prompts based on exact prefix matching. When DCP prunes a tool output, it changes the message content, which invalidates cached prefixes from that point forward.
 
 **Trade-off:** You lose some cache read benefits but gain larger token savings from reduced context size. In most cases, the token savings outweigh the cache miss cost—especially in long sessions where context bloat becomes significant.
 
+**Best use case:** Providers that count usage in requests rather than tokens, such as GitHub Copilot and Google Antigravity, see no negative price impact.
+
 
```
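To see why a prune invalidates the cache, note that the reusable prefix ends at the first message that differs from the previous request; a tiny illustrative sketch (not how any provider actually exposes caching):

```typescript
// Illustration only: the cached prefix is as long as the run of messages
// that exactly match the previous request, so editing message N invalidates
// everything from N onward.
function cachedPrefixLength(previous: string[], current: string[]): number {
  let n = 0;
  while (n < previous.length && n < current.length && previous[n] === current[n]) {
    n++;
  }
  return n;
}
```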
```diff
 ## Configuration
 
 DCP uses its own config file:
@@ -100,6 +108,14 @@ DCP uses its own config file:
     "supersedeWrites": {
       "enabled": true,
     },
+    // Prune tool inputs for errored tools after X turns
+    "purgeErrors": {
+      "enabled": true,
+      // Number of turns before errored tool inputs are pruned
+      "turns": 4,
+      // Additional tools to protect from pruning
+      "protectedTools": [],
+    },
     // (Legacy) Run an LLM to analyze what tool calls are no longer relevant on idle
     "onIdle": {
       "enabled": false,
```
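A minimal sketch of the purge-errors logic configured above, assuming per-call turn bookkeeping (the names and shapes are hypothetical, not DCP's internals):

```typescript
// Hypothetical shape: an errored tool call tagged with the turn it ran on.
type ErroredCall = { tool: string; input: string; error: string; turn: number };

// After `turns` turns have elapsed, drop the (potentially large) input of an
// errored call but keep its error message for context. Protected tools are
// never pruned.
function purgeErrors(
  calls: ErroredCall[],
  currentTurn: number,
  turns = 4,
  protectedTools: string[] = []
): ErroredCall[] {
  return calls.map((c) =>
    !protectedTools.includes(c.tool) && currentTurn - c.turn >= turns
      ? { ...c, input: "[input pruned]" } // error message survives
      : c
  );
}
```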
```diff
@@ -127,11 +143,7 @@ When enabled, turn protection prevents tool outputs from being pruned for a conf
 By default, these tools are always protected from pruning across all strategies:
```