**File added:** `.claude/commands/stackshift.diff.md` (+139 lines)
---
name: stackshift.diff
description: Compare specifications between directories or git commits to visualize what changed. Useful for PR reviews and tracking spec evolution.
---

# Spec Diff

Compare specifications between two locations to see what changed.

## What This Command Does

This command compares specifications and shows:

1. **Added specs** - New specifications that didn't exist before
2. **Removed specs** - Specifications that were deleted
3. **Modified specs** - Specifications with changed content
4. **Section-level changes** - Which sections were added, removed, or modified
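
At the directory level, this classification is just set operations on filenames plus a content comparison. A minimal bash sketch (temp directories and file contents below are invented for illustration, reusing the spec names from the sample report):

```bash
# Classify spec files as added/removed/modified between two directories
base=$(mktemp -d); compare=$(mktemp -d)
printf 'old\n' > "$base/legacy-login.md"
printf 'v1\n'  > "$base/user-authentication.md"
printf 'v2\n'  > "$compare/user-authentication.md"
printf 'new\n' > "$compare/user-mfa.md"

added=$(comm -13 <(ls "$base") <(ls "$compare"))    # only in compare
removed=$(comm -23 <(ls "$base") <(ls "$compare"))  # only in base
# files present in both directories but with different content
modified=$(comm -12 <(ls "$base") <(ls "$compare") |
  while read -r f; do cmp -s "$base/$f" "$compare/$f" || echo "$f"; done)
echo "added=$added removed=$removed modified=$modified"
```

`comm` expects sorted input, which `ls` provides by default; the process substitutions require bash.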

## Usage Patterns

### Compare with Previous Git Commit

Compare current specs with the previous commit:

```bash
# Extract the previous commit's specs into a temp directory
# (git show of a directory only lists its contents, so use git archive)
mkdir -p /tmp/old-specs
git archive HEAD~1 .specify/memory/specifications | tar -x -C /tmp/old-specs

# Then compare
/stackshift.diff /tmp/old-specs/.specify/memory/specifications .specify/memory/specifications
```

### Compare Two Branches

```bash
# Extract the base branch's specs into a temp directory
mkdir -p /tmp/main-specs
git archive main .specify/memory/specifications | tar -x -C /tmp/main-specs

# Compare with current branch
/stackshift.diff /tmp/main-specs/.specify/memory/specifications .specify/memory/specifications
```

### Compare Two Repositories

```bash
/stackshift.diff ~/git/legacy-app ~/git/new-app
```

## Output Format

```markdown
# Specification Diff Report

**Generated:** 2024-11-24 10:30:00
**Base:** /path/to/base
**Compare:** /path/to/compare

## Summary

| Status | Count |
|--------|-------|
| ➕ Added | 2 |
| ➖ Removed | 1 |
| 📝 Modified | 3 |
| ✅ Unchanged | 10 |
| **Total** | **16** |

## Changes

### ➕ user-mfa
**File:** `user-mfa.md`
**Status:** added

New specification added: Multi-factor Authentication

### 📝 user-authentication
**File:** `user-authentication.md`
**Status:** modified

**Section Changes:**

- `+` **Session Management**
- + Session timeout increased to 60 minutes
- + Added refresh token support

- `~` **Acceptance Criteria**
- + Must support MFA verification
- - Legacy OAuth 1.0 support removed

### ➖ legacy-login
**File:** `legacy-login.md`
**Status:** removed

Specification removed: Legacy Login System
```

## When to Use

1. **PR Reviews** - See what specs changed in a pull request
2. **Migration Tracking** - Compare legacy and new app specs
3. **Audit Trail** - Track how specs evolved over time
4. **Sync Verification** - Ensure specs are in sync across repos

## Integration with CI/CD

The GitHub Actions workflow (`.github/workflows/spec-alignment.yml`) automatically runs spec diff on PRs and posts a comment with the changes.

---

## Execute Diff

To compare specifications, I need two directories:

1. **Base directory** - The "before" state (e.g., main branch, legacy app)
2. **Compare directory** - The "after" state (e.g., feature branch, new app)

**Please provide the two directories to compare:**

```
Base: [path to base specs]
Compare: [path to compare specs]
```

Or specify a git ref to compare against:

```
Compare current specs with: [git ref like HEAD~1, main, v1.0.0]
```

**For git comparison, I'll:**
1. Create a temporary directory
2. Extract specs from the git ref
3. Compare with current `.specify/` directory
4. Show the diff report

**Example:**
```
Compare current specs with HEAD~1
```
**File added:** `.claude/commands/stackshift.quality.md` (+155 lines)
---
name: stackshift.quality
description: Analyze specification quality and get scores on completeness, testability, and clarity. Provides actionable feedback for improving specs.
---

# Spec Quality Analysis

Analyze the quality of specifications in this project and provide actionable feedback.

## What This Command Does

This command analyzes all specifications in the `.specify/` directory and scores them on:

1. **Completeness** (35% weight)
- Are all required sections present? (Feature, User Stories, Acceptance Criteria, Technical Requirements)
- Are recommended sections included? (Dependencies, Out of Scope, Implementation Notes)
- Are there unresolved `[NEEDS CLARIFICATION]` or `TODO` markers?

2. **Testability** (35% weight)
- Are acceptance criteria measurable? (specific numbers, timeouts, status codes)
- Do criteria use definitive language? (must, shall, will)
- Are there vague criteria? (should be good, performant, seamless)

3. **Clarity** (30% weight)
- Is the language unambiguous? (avoiding: appropriate, reasonable, some, various)
- Are sentences concise? (< 40 words)
- Are examples provided? (code blocks, Given/When/Then)
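
The three weights combine into a single overall score. A bash sketch of the weighted average (integer arithmetic, so fractional points are truncated):

```bash
# Weighted overall score from the three dimension scores
# (weights as above: completeness 35%, testability 35%, clarity 30%)
completeness=90; testability=70; clarity=80
overall=$(( (completeness * 35 + testability * 35 + clarity * 30) / 100 ))
echo "Overall: $overall/100"   # Overall: 80/100
```

With the 90/70/80 dimension scores from the sample report, this yields the 80/100 overall shown there.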

## Process

1. **Find specifications** in `.specify/memory/specifications/` or alternative locations
2. **Analyze each spec** for completeness, testability, and clarity
3. **Calculate scores** (0-100) for each dimension
4. **Identify issues** (errors, warnings, info)
5. **Generate suggestions** for improvement
6. **Output report** with visual score bars

## Output Format

```
# Specification Quality Report

**Generated:** 2024-11-24 10:30:00
**Project:** /path/to/project

## Summary

Overall Score: ████████░░ 80/100
Completeness: █████████░ 90/100
Testability: ███████░░░ 70/100
Clarity: ████████░░ 80/100

**Specs Analyzed:** 5
**Issues:** 2 errors, 5 warnings, 3 info

## Specifications

### ✅ user-authentication (92/100)
| Metric | Score |
|--------|-------|
| Completeness | 95/100 |
| Testability | 90/100 |
| Clarity | 90/100 |

### ⚠️ payment-processing (65/100)
| Metric | Score |
|--------|-------|
| Completeness | 70/100 |
| Testability | 55/100 |
| Clarity | 70/100 |

**Issues:**
- ⚠️ Found 3 "[NEEDS CLARIFICATION]" markers
- ⚠️ Vague or untestable criteria: "should be fast and responsive..."

**Suggestions:**
- 💡 Make 3 criteria more specific with measurable values
- 💡 Consider adding "Testing Strategy" section
```

## Scoring Thresholds

| Score | Rating | Action |
|-------|--------|--------|
| 80-100 | ✅ Good | Ready for implementation |
| 60-79 | ⚠️ Needs Work | Address warnings before implementing |
| 0-59 | ❌ Poor | Significant revision needed |
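
The thresholds translate directly into a lookup; a small bash sketch:

```bash
# Map a numeric score (0-100) to the rating used in the table above
rating() {
  if   [ "$1" -ge 80 ]; then echo "✅ Good"
  elif [ "$1" -ge 60 ]; then echo "⚠️ Needs Work"
  else                       echo "❌ Poor"
  fi
}
rating 92   # ✅ Good
rating 65   # ⚠️ Needs Work
rating 40   # ❌ Poor
```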

## Example Usage

**After running StackShift Gear 3 (Create Specs):**
```
/stackshift.quality
```

**Before implementing a feature:**
```
/stackshift.quality
# Review suggestions
# Fix issues in specs
# Re-run to verify improvement
```

## Integration with CI/CD

Copy `.github/workflows/spec-alignment.yml` to your repository to automatically check spec quality on PRs. The workflow:

- Runs spec quality analysis
- Posts the report as a PR comment
- Fails the PR if the score is below 60 or any errors are found

---

## Execute Analysis

Now analyzing specification quality in the current directory...

```bash
# Find specs directory
SPEC_DIR=""
if [ -d ".specify/memory/specifications" ]; then
  SPEC_DIR=".specify/memory/specifications"
elif [ -d ".specify/specifications" ]; then
  SPEC_DIR=".specify/specifications"
elif [ -d "specs" ]; then
  SPEC_DIR="specs"
fi

if [ -z "$SPEC_DIR" ]; then
  echo "❌ No specifications directory found!"
  echo "Expected: .specify/memory/specifications/, .specify/specifications/, or specs/"
  exit 1
fi

echo "📂 Found specs in: $SPEC_DIR"
ls -la "$SPEC_DIR"/*.md 2>/dev/null || echo "No .md files found"
```

**Analyze each specification file for:**

1. **Required sections** - Check for `## Feature`, `## User Stories`, `## Acceptance Criteria`, `## Technical Requirements`

2. **Incomplete markers** - Search for `[NEEDS CLARIFICATION]`, `[TODO]`, `[TBD]`, `[PLACEHOLDER]`

3. **Testable criteria** - Look for specific numbers, timeouts, status codes in acceptance criteria

4. **Ambiguous language** - Flag words like: appropriate, reasonable, adequate, sufficient, good, fast, various, some

5. **Long sentences** - Flag sentences > 40 words

6. **Code examples** - Bonus for having code blocks or Given/When/Then format

Generate a comprehensive quality report with scores and actionable suggestions for each specification.
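
Checks 2 and 4 above can be prototyped with `grep`. A hypothetical sketch (the spec file and its contents are invented for illustration):

```bash
# Count incomplete markers and ambiguous-language hits in one spec file
spec=$(mktemp)
cat > "$spec" <<'EOF'
## Acceptance Criteria
- Login must complete within 2 seconds
- Error handling should be appropriate [NEEDS CLARIFICATION]
EOF

markers=$(grep -cE '\[NEEDS CLARIFICATION\]|\[TODO\]|\[TBD\]|\[PLACEHOLDER\]' "$spec")
vague=$(grep -ciwE 'appropriate|reasonable|adequate|sufficient|various' "$spec")
echo "markers=$markers vague=$vague"   # markers=1 vague=1
rm -f "$spec"
```

`grep -c` counts matching lines, which is a reasonable proxy for flagged criteria since each criterion sits on its own bullet line.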