
refactor(learnings): externalize per-repo learnings to separate GitHub repo #2

Open
sixty4bit wants to merge 3 commits into workos:main from sixty4bit:fix/externalize-learnings

Conversation

@sixty4bit

Summary

  • Per-repo tactical knowledge (docs/learnings/{repo}.md) no longer lives inside the case repo. Instead, learnings are stored in an external GitHub repo configured via the CASE_LEARNINGS_REPO env var, matching the CASE_ASSETS_REPO pattern introduced in #1 (fix(harness): remove hardcoded filesystem paths for portability).
  • New scripts read-learning.sh and write-learning.sh handle all interaction with the external repo via gh api (no clone needed).
  • The implementer agent calls read-learning.sh before coding; the retrospective agent calls write-learning.sh to persist tactical knowledge. Harness-level escalations (3+ similar learnings promoting to conventions) still write to the local case repo.
  • Deleted the 5 empty per-repo learnings files. Converted docs/learnings/README.md to setup instructions for the external repo.
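The read path described above could look roughly like the sketch below, shown as a function for clarity. The `<topic>.md` file layout, the exact error wording, and the treatment of a missing file are assumptions, not the PR's actual code; only the `gh api` contents-endpoint mechanism is taken from the description.

```shell
# Hypothetical sketch of read-learning.sh. Assumptions: the external repo
# stores one file per topic at "<topic>.md", and CASE_LEARNINGS_REPO holds
# an "owner/name" slug.
read_learning() {
  local topic="$1" content
  if [ -z "${CASE_LEARNINGS_REPO:-}" ]; then
    echo "CASE_LEARNINGS_REPO is not set." >&2
    echo "Setup: create a learnings repo, then export CASE_LEARNINGS_REPO=owner/name" >&2
    return 1
  fi
  # The GitHub contents API returns base64-encoded file content. A 404
  # (no learnings recorded yet for this topic) is treated as success.
  if content=$(gh api "repos/${CASE_LEARNINGS_REPO}/contents/${topic}.md" --jq '.content' 2>/dev/null); then
    printf '%s' "$content" | base64 -d
  else
    echo "no learnings yet for ${topic}" >&2
  fi
}
```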

Why

Codebase-specific learnings don't belong in an open-source harness repo. Each fork accumulates different knowledge, and user-specific data pollutes the shared repo. Externalizing them means each user/fork has their own learnings repo, and the harness itself stays generic.

Test plan

  • All 17 shell scripts and hooks pass bash -n syntax validation
  • Run read-learning.sh cli without CASE_LEARNINGS_REPO set — should fail with setup instructions
  • Run read-learning.sh cli with env var pointing to a repo with no cli.md — should exit 0 with stderr note
  • Run write-learning.sh cli "- **2026-03-17** — test entry" — should create file and commit
  • Run read-learning.sh cli again — should output the entry
  • Verify no remaining local learnings file references in agent prompts
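The write path exercised by the test plan could be sketched as follows. The `<topic>.md` layout and the commit message format are assumptions; the sha handling reflects how the GitHub contents API works (updating an existing file requires its current blob sha, while omitting it creates the file).

```shell
# Hypothetical sketch of write-learning.sh: append an entry to the topic
# file in the external repo and commit through the contents API, no clone.
write_learning() {
  local topic="$1" entry="$2" path sha existing updated
  path="${topic}.md"
  if [ -z "${CASE_LEARNINGS_REPO:-}" ]; then
    echo "CASE_LEARNINGS_REPO is not set; export CASE_LEARNINGS_REPO=owner/name" >&2
    return 1
  fi
  # Fetch the current sha and content; both are empty for a new file.
  sha=$(gh api "repos/${CASE_LEARNINGS_REPO}/contents/${path}" --jq '.sha' 2>/dev/null || true)
  existing=$(gh api "repos/${CASE_LEARNINGS_REPO}/contents/${path}" --jq '.content' 2>/dev/null | base64 -d || true)
  if [ -n "$existing" ]; then
    updated=$(printf '%s\n%s\n' "$existing" "$entry" | base64 | tr -d '\n')
  else
    updated=$(printf '%s\n' "$entry" | base64 | tr -d '\n')
  fi
  gh api --method PUT "repos/${CASE_LEARNINGS_REPO}/contents/${path}" \
    -f message="learnings(${topic}): add entry" \
    -f content="$updated" \
    ${sha:+-f sha="$sha"}
}
```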

🤖 Generated with Claude Code

sixty4bit and others added 3 commits March 17, 2026 10:40
All references to /Users/nicknisi/Developer/case have been replaced
with dynamic resolution so the harness works for any user/fork.

Shell scripts and hooks now derive CASE_REPO from their own location
using the SCRIPT_DIR pattern (matching bootstrap.sh). SKILL.md and
agent prompts use ${CASE_REPO} as a variable resolved by the
orchestrator at invocation time. Each agent's Input contract now
includes CASE_REPO as an explicit parameter.
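The SCRIPT_DIR pattern referenced above is the standard bash idiom for resolving a script's own directory; deriving CASE_REPO one level up assumes the script lives in a `scripts/` directory under the repo root, which is a guess about this repo's layout.

```shell
#!/usr/bin/env bash
# Standard SCRIPT_DIR idiom: resolve this script's directory regardless of
# where it is invoked from, then derive the repo root from it.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Assumes this script lives in <repo>/scripts/, so the repo root is one up.
CASE_REPO="$(dirname "$SCRIPT_DIR")"
echo "CASE_REPO=${CASE_REPO}"
```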

upload-screenshot.sh now requires the CASE_ASSETS_REPO env var
instead of defaulting to a specific user's repo, with clear setup
instructions on failure.
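A fail-fast guard of that shape might look like this minimal sketch; the function name and the exact wording of the setup instructions are assumptions.

```shell
# Hypothetical guard near the top of upload-screenshot.sh: refuse to run
# without CASE_ASSETS_REPO and print setup instructions, instead of
# falling back to any particular user's repo.
require_assets_repo() {
  if [ -z "${CASE_ASSETS_REPO:-}" ]; then
    cat >&2 <<'EOF'
CASE_ASSETS_REPO is not set.

Setup:
  1. Create a repo to host uploaded screenshots, e.g. <you>/case-assets.
  2. export CASE_ASSETS_REPO=<you>/case-assets
EOF
    return 1
  fi
}
```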

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
fix(harness): remove hardcoded filesystem paths for portability
refactor(learnings): externalize per-repo learnings to separate GitHub repo

Per-repo tactical knowledge no longer lives in docs/learnings/{repo}.md
inside the case repo. Instead, learnings are stored in an external GitHub
repo configured via the CASE_LEARNINGS_REPO env var, matching the
CASE_ASSETS_REPO pattern.

New scripts:
- scripts/read-learning.sh: reads learnings via gh api (stdout)
- scripts/write-learning.sh: appends entries and commits via gh api

The implementer agent now calls read-learning.sh before coding, and the
retrospective agent calls write-learning.sh to persist tactical knowledge.
Harness-level escalations (3+ similar learnings → convention) still write
to the local case repo.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
nicknisi added a commit that referenced this pull request Mar 17, 2026
- Remove double-written metrics: drop legacy log-run.sh call from
  pipeline.ts, writer.ts is the single path (#1)
- Await retrospective instead of fire-and-forget so process.exit
  doesn't kill it (#3)
- Fix README claiming retrospective applies changes directly — it
  proposes amendments, only learnings are applied directly (#6)
- Document attended-mode retry semantics: maxRetries is per-attempt,
  human can re-enter implement indefinitely (#2)
- Reconcile --worktree docs: removed from flag list, branch isolation
  is handled by skill layer before orchestrator dispatch (#4)