README.md (2 additions, 2 deletions)
@@ -19,8 +19,6 @@ Add to your OpenCode config:
Using `@latest` ensures you always get the newest version automatically when OpenCode starts.
> **Note:** If you use OAuth plugins (e.g., for Google or other services), place this plugin last in your `plugin` array to avoid interfering with their authentication flows.
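For illustration, here is a minimal sketch of the resulting config (assuming the JSON config file, e.g. `opencode.json`). Both package names below are placeholders, not the real ones; the point is the ordering of the `plugin` array and the `@latest` pin:

```json
{
  "plugin": [
    "some-oauth-plugin@latest",
    "this-pruning-plugin@latest"
  ]
}
```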
Restart OpenCode. The plugin will automatically start optimizing your sessions.
## How Pruning Works
@@ -49,6 +47,8 @@ LLM providers like Anthropic and OpenAI cache prompts based on exact prefix matches
**Trade-off:** You lose some cache read benefits but gain larger token savings from reduced context size and performance improvements through reduced context poisoning. In most cases, the token savings outweigh the cache miss cost—especially in long sessions where context bloat becomes significant.
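To make that concrete, here is an illustrative TypeScript sketch (not the plugin's actual code) of why pruning breaks prefix caching: providers key the cache on the exact content of each message prefix, so removing any earlier message invalidates every prefix after it.

```ts
import { createHash } from "node:crypto";

type Message = { role: "user" | "assistant" | "tool"; content: string };

// One cache key per prefix messages[0..i]; editing or removing an earlier
// message changes every key from that point onward.
function prefixKeys(messages: Message[]): string[] {
  return messages.map((_, i) =>
    createHash("sha256")
      .update(JSON.stringify(messages.slice(0, i + 1)))
      .digest("hex")
      .slice(0, 8)
  );
}

const session: Message[] = [
  { role: "user", content: "read file A" },
  { role: "tool", content: "<10k tokens of stale file contents>" },
  { role: "user", content: "now summarize it" },
];

// Prune the stale tool result: only the untouched leading prefix
// still matches what the provider has cached.
const pruned = session.filter((_, i) => i !== 1);
console.log(prefixKeys(session)); // three keys
console.log(prefixKeys(pruned));  // first key unchanged, the rest diverge
```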
> **Note:** In testing, cache hit rates were approximately 65% with DCP enabled vs 85% without.
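As a rough back-of-envelope check using those hit rates (everything else here is an assumption: the token counts, the $3/MTok input price, and the roughly 10x cache-read discount some providers offer; substitute your provider's real rates):

```ts
const inputPrice = 3 / 1_000_000;       // assumed $3 per million input tokens
const cacheReadPrice = inputPrice / 10; // assumed cache reads cost ~10% of fresh input

function requestCost(tokens: number, hitRate: number): number {
  const cached = tokens * hitRate;
  return cached * cacheReadPrice + (tokens - cached) * inputPrice;
}

// Hypothetical session: 100k-token bloated context vs 40k after pruning.
console.log(requestCost(100_000, 0.85).toFixed(4)); // "0.0705" without DCP
console.log(requestCost(40_000, 0.65).toFixed(4));  // "0.0498" with DCP
```

Under these assumed rates, per-request cost scales with `tokens * (1 - 0.9 * hitRate)`, so pruning comes out ahead once it removes more than about 43% of the context.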
**Best use case:** Providers that meter usage per request rather than per token, such as GitHub Copilot and Google Antigravity, see no negative price impact from the lost cache hits.