.claude/rules/vmcp-anti-patterns.md (16 additions, 0 deletions)

@@ -63,3 +63,19 @@ Storing a mutable struct in context and having multiple middleware modify it in
**Detect**: Middleware mutating fields on structs retrieved from context; structs stored in context with exported mutable fields; multiple middleware reading and writing the same context value.

**Instead**: Treat context values as immutable; create new values with `context.WithValue` if downstream needs to add info. Better yet, pass data explicitly (see #1).

## 8. Unnecessary Abstraction / Interface Modification

Introducing new abstractions (caches, wrapper types, new interface methods) or modifying stable interfaces to accommodate a single implementation's concern. A stable interface being modified is a sign that implementation details are leaking across boundaries.

**Detect**: New interface methods added to satisfy one implementation; wrapper types that add a layer but don't meaningfully change behavior; caches where every "hit" still requires a remote call; new abstractions without evidence (profiling, incidents) justifying the complexity; stable interfaces gaining methods that only one consumer needs.

**Instead**: Solve the concern internally to the component that needs it — don't push implementation-specific concerns onto shared interfaces. Start with the simplest approach and add abstraction only when there is concrete evidence it's needed.

## 9. Premature Optimization

Adding caches, connection pools, or other performance optimizations without evidence that the unoptimized path is a problem. These add complexity (invalidation logic, staleness risks, lifecycle management) that must be maintained regardless of whether the optimization provides measurable benefit.

**Detect**: Caches introduced without profiling data or load estimates showing the uncached path is too slow; connection pools or object pools where the allocation cost hasn't been measured; complexity added to avoid overhead (e.g., TLS handshakes, serialization) at request rates where the overhead is negligible.

**Instead**: Start with the straightforward implementation. Measure under realistic load. Add optimization only when measurements show it's needed, and document the evidence in the commit or PR description.
.claude/skills/code-review-assist/SKILL.md (83 additions, 0 deletions)

@@ -0,0 +1,83 @@
---
name: code-review-assist
description: Augments human code review by summarizing changes, surfacing key review questions, assessing test coverage, and identifying low-risk sections. Use when reviewing a diff, PR, or code snippet as a senior review partner.
---

# Code Review Augmentation

## Purpose

Act as a senior review partner — not a replacement reviewer. Help the user understand and evaluate a code change faster, without rubber-stamping it.

## How This Differs from the `code-reviewer` Agent

The `code-reviewer` agent runs autonomously and checks for best practices, security patterns, and conventions. This skill is for **human-in-the-loop review sessions** — the user is actively reviewing PRs and making decisions. Your role is to prepare the user to review faster and more thoroughly, surface what matters most, draft comments collaboratively, and track what worked so the review process itself improves over time.

## Instructions

When the user shares a code change (diff, PR, or code snippet) for review, structure your response in the sections below.

### 1. Change Summary

In 2-4 sentences, explain what this change does and why it appears to exist. State the apparent intent plainly. If the intent is unclear, say so — that's a review finding in itself.

### 2. Key Review Questions

Surface the 2-5 most important questions the reviewer should be asking about this change. Focus on:

- **Justification**: Is the problem this solves clear? Is this the right time/place to solve it?
- **Approach fit**: Could this be solved more simply? Are there obvious alternative approaches with better tradeoffs? If so, briefly sketch them.
- **Complexity cost**: Does this change add abstractions, indirection, new dependencies, or conceptual overhead that may not be justified? Flag anything that makes the codebase harder to reason about.
- **Boundary concerns**: Does this change respect existing module/service boundaries, or does it blur them?
- **Necessity**: Is this the simplest approach that solves the problem? If the change introduces new interfaces, modifies stable interfaces, adds caches, or creates new abstraction layers — challenge it. A stable interface being modified to accommodate one implementation is a sign that concerns are leaking across boundaries. Ask: can this be solved internally to the component that needs it? Is there evidence (profiling, incidents) justifying the added complexity, or should we start simpler?
- **Premature optimization**: Does the change add caches, pools, or other performance machinery without evidence the unoptimized path is a problem? Optimizations add maintenance cost (invalidation, staleness, lifecycle management) regardless of whether they provide measurable benefit. Ask: has the straightforward approach been measured under realistic load?

### 3. Testing Assessment

Evaluate whether the change is well-tested relative to its risk:

- Are the important behaviors covered?
- Are edge cases and failure modes addressed?
- Are tests testing the right thing (behavior, not implementation details)?
- If tests are missing or weak, say specifically what should be tested.
- For validation or branching logic, enumerate the full input matrix (type × field combinations, flag × state permutations) and verify each cell is covered. Don't eyeball — be systematic.

### 4. vMCP Anti-Pattern Check

If the change touches files under `pkg/vmcp/` or `cmd/vmcp/`, also run the `vmcp-review` skill against those files. Don't reproduce the full vmcp-review report — instead, summarize the most important findings (must-fix and should-fix severity) inline with your Key Review Questions. Link back to the specific anti-pattern by number (e.g., "see vMCP anti-pattern #8") so the reviewer can dig deeper if needed.

### 5. Things That Look Fine

Briefly note which parts of the change appear straightforward and low-risk so the reviewer can skim those confidently.

### 6. Reading Order (large changes only)

If the change is large, suggest a reading order — which files/sections to review carefully vs. skim.

## Review Session Tracking

When reviewing multiple PRs in a session, maintain a local file (`review-session-notes.md`) that documents what happened for each PR:

1. **After the user leaves comments or makes a decision**, record:
- What the skill surfaced vs. what the user actually commented on
- Where the skill's output aligned with the user's review
- Where the skill missed something the user caught, or flagged something the user didn't care about
- Whether the user had to arrive at the key insight through discussion rather than the initial review output

2. **At the end of the session** (or when the user asks to reflect), analyze the notes for patterns:
- Recurring gaps — types of issues the skill consistently misses
- False priorities — things the skill flags that the user consistently skips
- Discussion-dependent insights — conclusions the user reached through back-and-forth that the skill should surface directly
- Propose concrete updates to this skill, the vmcp-review skill, or `.claude/rules/` files based on what was learned

The goal is continuous improvement: each review session should make the next one more efficient.
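One possible shape for `review-session-notes.md` (the headings are illustrative, not prescribed):

```markdown
# Review Session Notes

## PR <number>: <title>
- Surfaced by skill: ...
- User actually commented on: ...
- Aligned: ...
- Missed / over-flagged: ...
- Insights reached only via discussion: ...

## Session reflection
- Recurring gaps: ...
- False priorities: ...
- Proposed updates to skills or `.claude/rules/`: ...
```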

## Principles

- Never say "LGTM" or give a blanket approval. Surface what the human reviewer should think about, not the decision itself.
- Don't waste the reviewer's time on style nits, formatting, or naming unless it genuinely hurts readability. Assume linters handle that.
- Prioritize findings. Lead with whatever carries the most risk or warrants the most thought.
- Be direct. Say "this adds complexity that may not be justified" rather than hedging with "you might want to consider..."
- When suggesting alternatives, be concrete enough to evaluate but brief — a sentence or two, not a full implementation.
- Question the premise, not just the implementation. Don't accept that an abstraction, cache, or optimization should exist and then review its quality — first ask whether it should exist at all. The highest-value review feedback often eliminates complexity rather than improving it.
- If you lack context (e.g., you don't know the broader system), say what assumptions you're making and what context would change your assessment.