92 changes: 92 additions & 0 deletions dev/commands/feedback.md
# Session Feedback

Analyze this conversation and identify what documentation or context was missing, incomplete, or incorrect. The goal is to continuously improve the project's knowledge base so future conversations are more efficient.

## Step 1: Session Analysis

Reflect on the work done in this conversation. For each area, identify friction points:

1. **Exploration overhead**: What parts of the codebase did you have to discover by searching that should have been documented? (e.g., patterns, conventions, module responsibilities)
2. **Wrong assumptions**: Did you make incorrect assumptions due to missing or misleading documentation?
3. **Repeated patterns**: Did you discover recurring patterns or conventions that aren't documented anywhere?
4. **Missing context**: What background knowledge would have helped you start faster? (e.g., architecture decisions, data flow, naming conventions)
5. **Tooling gaps**: Were there commands, scripts, or workflows that you had to figure out?

## Step 2: Documentation Audit

For each friction point identified, determine the appropriate fix. Check the existing documentation to avoid duplicating what's already there:

- `AGENTS.md` — Top-level project instructions and component map
- `CLAUDE.md` — Entry point referencing AGENTS.md
- `docs/AGENTS.md` — Documentation site guide
- `infrahub_sdk/ctl/AGENTS.md` — CLI development guide
- `infrahub_sdk/pytest_plugin/AGENTS.md` — Pytest plugin guide
- `tests/AGENTS.md` — Testing guide

Read the relevant existing files to understand what's already documented before proposing changes.
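Before proposing changes, the audit can confirm each guide file from the list above actually exists. A minimal sketch (paths as listed above, relative to the repo root):

```shell
#!/usr/bin/env bash
# Check which guide files exist before reading them,
# so the audit covers each one and flags any that are missing.
guides=(
  AGENTS.md
  CLAUDE.md
  docs/AGENTS.md
  infrahub_sdk/ctl/AGENTS.md
  infrahub_sdk/pytest_plugin/AGENTS.md
  tests/AGENTS.md
)
for f in "${guides[@]}"; do
  if [ -f "$f" ]; then
    echo "present: $f"
  else
    echo "missing: $f"
  fi
done
```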

## Step 3: Generate Report

Present the feedback as a structured report with the following sections. Only include sections that have content — skip empty sections.

### Format

```markdown
## Session Feedback Report

### What I Was Working On
<!-- Brief summary of the task(s) performed in this conversation -->

### Documentation Gaps
<!-- Things that should be documented but aren't -->

For each gap:

- **Topic**: What's missing
- **Where**: Which file should contain this (existing file to update, or new file to create)
- **Why**: How this would have helped during this conversation
- **Suggested content**: A draft of what should be added (be specific and actionable)

### Documentation Corrections
<!-- Things that are documented but wrong or misleading -->

For each correction:

- **File**: Path to the file
- **Issue**: What's wrong or misleading
- **Fix**: What it should say instead

### Discovered Patterns
<!-- Conventions or patterns found in the code that aren't documented -->

For each pattern:

- **Pattern**: Description of the convention
- **Evidence**: Where in the code this pattern is used (file paths)
- **Where to document**: Which AGENTS.md or guide file should capture this

### Memory Updates
<!-- Propose additions/changes to MEMORY.md for cross-session persistence -->

For each update:

- **Action**: Add / Update / Remove
- **Content**: What to write
- **Reason**: Why this is worth remembering across sessions
```
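As a sketch, a single entry under **Documentation Gaps** might look like the following (the topic and suggested content here are hypothetical examples, not findings from a real session):

```markdown
- **Topic**: How branch-aware queries are constructed
- **Where**: `infrahub_sdk/ctl/AGENTS.md` (existing file)
- **Why**: Several searches this session were spent rediscovering the query pattern
- **Suggested content**: A short subsection showing the standard query call and its branch parameter
```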

## Step 4: Apply Changes

After presenting the report, ask the user which changes they want to apply. Present the options:

1. **Apply all** — Create/update all proposed documentation files and memory
2. **Cherry-pick** — Let the user select which changes to apply
3. **None** — Just keep the report as reference, don't modify any files


For approved changes:

- Edit existing files when updating documentation
- Create new files only when no appropriate existing file exists
- Update `MEMORY.md` with approved memory changes
- Keep all changes minimal and focused — don't over-document
36 changes: 36 additions & 0 deletions dev/commands/pre-ci.md
# Pre-CI Checks

Run a subset of fast CI checks locally. These are lightweight validations that catch common issues before pushing. Run all steps and report a summary at the end.

## Steps

1. **Format** Python code:

   ```bash
   uv run invoke format
   ```

2. **Lint** (YAML, Ruff, ty, mypy, markdownlint, vale):

   ```bash
   uv run invoke lint
   ```

3. **Python unit tests**:

   ```bash
   uv run pytest tests/unit/
   ```

4. **Docs unit tests** (vitest):

   ```bash
   (cd docs && npx --no-install vitest run)
   ```

5. **Validate generated documentation** (regenerate and check for drift):

   ```bash
   uv run invoke docs-validate
   ```

## Instructions

- Run each step in order using the Bash tool.
- If a step fails, continue with the remaining steps.
- At the end, print a summary table of all steps with pass/fail status.
- Do NOT commit or push anything.
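The run-everything-then-summarize flow above can be sketched as a small script. This is an illustration only: the `true`/`false` entries are placeholders standing in for the real `invoke`/`pytest` commands, and the project itself runs the steps through its agent tooling rather than this script.

```shell
#!/usr/bin/env bash
# Sketch of a pre-ci runner: run every step even if one fails,
# record pass/fail per step, and print a summary table at the end.
steps=(
  "format:true"
  "lint:true"
  "unit-tests:false"
  "docs-tests:true"
)
summary=""
overall=0
for entry in "${steps[@]}"; do
  name="${entry%%:*}"   # step label before the colon
  cmd="${entry#*:}"     # placeholder command after the colon
  if $cmd; then
    status="PASS"
  else
    status="FAIL"
    overall=1           # remember that at least one step failed
  fi
  summary+=$(printf '%-12s %s' "$name" "$status")$'\n'
done
printf 'Step         Status\n-------------------\n%s' "$summary"
```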