31 changes: 31 additions & 0 deletions .claude/rules/development-workflow.md
@@ -29,6 +29,7 @@ If you think any of these, STOP and check for skills:

**Project Skills** (defined in `.claude/skills/`):
- `prd-construction` - Templates and methodology for creating PRDs and implementation plans
- `unit-test-writing` - Edge case checklist, test structure patterns, coverage workflow

**Managed Skills** (from superpowers plugin - install via `claude plugin add superpowers-marketplace`):
- `superpowers:brainstorming` - Refine ideas into designs through collaborative questioning
@@ -123,6 +124,36 @@ No Issues → Proceed to Verification
- Create PR against `develop` branch
- Always use squash merge with branch deletion (`gh pr merge --squash --delete-branch`)

#### Build Verification Summary Format

After running builds with quality checks, provide a scannable summary:

```text
Build verified successfully:
- ✅ Tests: 56 passed, 0 failed
- ✅ SpotBugs: 0 bugs, 0 errors
- ✅ PMD: 0 violations, 97 warnings
- ✅ Checkstyle: 0 violations, 5 warnings
- ✅ Coverage: 65% (target: 60%)
```

**Guidelines:**
- Use ✅ for passing checks (no blocking errors/violations), ❌ for failures
- Use ⚠️ when coverage is below target (the build still succeeds, but with a coverage warning)
- Always report both errors/violations AND warnings for each tool
- Add brief context for notable items (e.g., "import order fixed")
- Report failures clearly so they can be addressed before proceeding

**Example with failures:**
```text
Build failed:
- ❌ Tests: 54 passed, 2 failed
- ❌ SpotBugs: 2 bugs (null pointer issues), 0 errors
- ✅ PMD: 0 violations, 45 warnings
- ❌ Checkstyle: 3 violations (missing Javadoc), 12 warnings
- ⚠️ Coverage: 58% (target: 60%)
```

### Workflow Summary
```
GitHub issue/prompt
41 changes: 41 additions & 0 deletions .claude/rules/unit-testing.md
@@ -0,0 +1,41 @@
# Unit Testing Standards

## Core Principles

### What NOT to Test

**Never test trivial code:**
- Getters and setters (no logic = no test)
- Simple constructors that only assign fields
- Delegation methods that just call another method
- Framework-generated code (Lombok, records, etc.)

**Why:** Tests should verify behavior, not structure. Trivial tests add maintenance burden without catching bugs.
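
For instance (a hypothetical `Order`, not a class from this repository), the constructor and getter below warrant no tests, while the total calculation does:

```java
import java.math.BigDecimal;
import java.util.List;

// Hypothetical domain types, used only to illustrate the rule above.
record LineItem(String sku, BigDecimal price) { }

class Order {
    private final List<LineItem> items;

    Order(List<LineItem> items) {   // simple field assignment: no test needed
        this.items = items;
    }

    List<LineItem> getItems() {     // trivial getter: no test needed
        return items;
    }

    BigDecimal total() {            // business logic: worth testing
        return items.stream()
                .map(LineItem::price)
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}
```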

### What TO Test

**Always test:**
- Business logic and calculations
- Validation and error handling
- State changes and side effects
- Integration points and boundaries
- Edge cases and corner cases
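
Continuing the hypothetical `Order` sketch above, tests are worth writing for the calculation and its boundary case, not the accessor:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import java.util.List;

import org.junit.jupiter.api.Test;

class OrderTest {

    @Test
    void sumsLineItemPrices() {
        Order order = new Order(List.of(
                new LineItem("A", new BigDecimal("2.50")),
                new LineItem("B", new BigDecimal("7.50"))));

        assertEquals(new BigDecimal("10.00"), order.total());
    }

    @Test
    void returnsZeroWhenOrderIsEmpty() {
        assertEquals(BigDecimal.ZERO, new Order(List.of()).total());
    }
}
```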

### Prioritize Edge Cases

Every test class should prioritize edge case coverage over happy paths:

- **Boundary conditions** - empty collections, null inputs, zero values, max values
- **Error paths** - invalid inputs, missing required data, exception scenarios
- **State transitions** - before/after states, partial completion, rollback scenarios
- **Combinations** - multiple flags, conflicting options, compound conditions
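
A sketch of what that prioritization can look like, assuming a hypothetical `QuantityParser` (inlined here only so the tests compile on their own):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class QuantityParserTest {

    // Hypothetical helper under test, inlined for the example.
    static final class QuantityParser {
        static final int MAX = 10_000;

        static int parse(String raw) {
            if (raw == null || raw.isBlank()) {
                throw new IllegalArgumentException("quantity is required");
            }
            int value = Integer.parseInt(raw.trim());
            if (value < 0 || value > MAX) {
                throw new IllegalArgumentException("quantity out of range: " + value);
            }
            return value;
        }
    }

    // Boundary condition: zero is the smallest legal value
    @Test
    void acceptsZero() {
        assertEquals(0, QuantityParser.parse("0"));
    }

    // Boundary condition: the maximum is still accepted
    @Test
    void acceptsMaximumQuantity() {
        assertEquals(10_000, QuantityParser.parse("10000"));
    }

    // Error path: null input is rejected with a clear exception
    @Test
    void throwsWhenInputIsNull() {
        assertThrows(IllegalArgumentException.class, () -> QuantityParser.parse(null));
    }

    // Combination: surrounding whitespace plus a negative value
    @Test
    void throwsOnNegativeInputWithWhitespace() {
        assertThrows(IllegalArgumentException.class, () -> QuantityParser.parse(" -1 "));
    }
}
```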

## Mandatory Coverage Evaluation

**When modifying existing code, you MUST evaluate and expand test coverage.**

1. Check if tests exist for the code path you're touching
2. Identify missing edge cases
3. Add tests before completing the work

Use the `unit-test-writing` skill for the detailed workflow.
158 changes: 158 additions & 0 deletions .claude/skills/unit-test-writing.md
@@ -0,0 +1,158 @@
---
name: unit-test-writing
description: Use when writing or expanding unit tests - provides edge case checklist, test structure patterns, and coverage workflow
---

# Unit Test Writing Skill

## When to Use

- Writing new unit tests
- Expanding test coverage on existing code
- Reviewing test completeness

## Edge Case Checklist

**Create TodoWrite items for each applicable category:**

| Category | Examples | Check |
|----------|----------|-------|
| Empty/Null | Empty string, null reference, empty collection | ☐ |
| Boundaries | Zero, negative, max int, min int, boundary values | ☐ |
| Invalid | Wrong type, malformed input, out of range | ☐ |
| Missing | Required field absent, partial data | ☐ |
| Duplicates | Repeated values, duplicate keys | ☐ |
| Ordering | First, last, middle, unsorted | ☐ |
| Concurrency | Race conditions, thread safety (if applicable) | ☐ |
| State | Uninitialized, already processed, closed | ☐ |
| Combinations | Multiple flags, conflicting options | ☐ |
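
One way to work through the checklist is to give each applicable category at least one test method; a parameterized test can cover the Empty/Null row in a single place. The validator below is hypothetical and inlined only so the example compiles, and it assumes `junit-jupiter-params` is on the test classpath:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.NullAndEmptySource;
import org.junit.jupiter.params.provider.ValueSource;

class UsernameValidatorTest {

    // Hypothetical validator under test, inlined for the example.
    static boolean isValid(String name) {
        return name != null && !name.isBlank() && name.length() <= 32;
    }

    // Empty/Null: null, empty, and blank input in one parameterized test
    @ParameterizedTest
    @NullAndEmptySource
    @ValueSource(strings = {"   "})
    void rejectsNullEmptyAndBlankNames(String name) {
        assertFalse(isValid(name));
    }

    // Boundaries: exactly at the 32-character limit, and one past it
    @Test
    void acceptsNameAtMaximumLength() {
        assertTrue(isValid("a".repeat(32)));
    }

    @Test
    void rejectsNameOverMaximumLength() {
        assertFalse(isValid("a".repeat(33)));
    }
}
```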

## Coverage Expansion Workflow

When touching existing code:

### Step 1: Assess Current Coverage

```bash
# Check if tests exist
find . -name "*Test.java" | xargs grep -l "ClassName"

# Run with coverage (if configured)
mvn -pl module test jacoco:report
```

### Step 2: Identify Gaps

Review the code and ask:
- What error conditions aren't tested?
- What boundary values aren't tested?
- What edge cases are missing?

### Step 3: Create Test Plan

Use TodoWrite to track each test to add:

```text
- [ ] Test: returns error when input is null
- [ ] Test: returns error when input is empty
- [ ] Test: handles maximum allowed value
- [ ] Test: throws on negative input
```

### Step 4: Write Tests

For each test, follow this structure:

```java
@Test
@DisplayName("descriptive name of scenario")
void descriptiveMethodName() {
// Arrange - set up test data and dependencies

// Act - execute the code under test

// Assert - verify the expected outcome
}
```
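
A filled-in example of that structure (the discount function is hypothetical and inlined so the test stands alone):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

class DiscountTest {

    // Hypothetical pure function under test, inlined for the example.
    static int discountedPriceCents(int priceCents, int percentOff) {
        return priceCents - (priceCents * percentOff) / 100;
    }

    @Test
    @DisplayName("applies a 25% discount to the listed price")
    void appliesTwentyFivePercentDiscount() {
        // Arrange - a price of 80.00 expressed in cents
        int priceCents = 8000;

        // Act - apply the discount under test
        int result = discountedPriceCents(priceCents, 25);

        // Assert - 25% off 80.00 is 60.00
        assertEquals(6000, result);
    }
}
```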

## Test Naming Convention

Use names that describe the scenario and expected outcome:

```java
// Pattern: [action]When[condition] or [expectedResult]When[condition]

// Good
void returnsErrorWhenRequiredArgumentMissing()
void throwsOnMultipleInvalidOptions()
void returnsEmptyWhenNoTargetCommand()

// Bad
void testValidate()
void test1()
void testMethod()
```

## Test Fixture Patterns

### Focused Fixtures

Create dedicated test classes for specific scenarios:

```java
/**
* A test command that requires an extra argument.
*/
class TestCommandWithRequiredArg extends AbstractTerminalCommand {
// Focused on one scenario
}
```

### Helper Methods

Create reusable setup methods:

```java
@NonNull
private CallingContext createContext(@NonNull String... args) {
return new CallingContext(processor, Arrays.asList(args));
}
```

### Nested Test Classes

Group related tests with `@Nested`:

```java
@Nested
@DisplayName("validateExtraArguments()")
class ValidateExtraArgumentsTests {

@Test
@DisplayName("returns empty when no target command")
void returnsEmptyWhenNoTargetCommand() { }

@Test
@DisplayName("returns error when required argument missing")
void returnsErrorWhenRequiredArgumentMissing() { }
}
```

## Integration with TDD

When following TDD, prioritize tests in this order:

1. **Edge cases first** - They reveal design issues early
2. **Error/failure cases** - Often undertested
3. **Happy path last** - Most obvious case

## Quick Reference

| Do | Don't |
|----|-------|
| Test edge cases thoroughly | Test getters/setters |
| Test error paths | Test trivial constructors |
| Use descriptive test names | Use vague names like `test1` |
| Create focused test fixtures | Create god-object test utilities |
| Group related tests with `@Nested` | Mix unrelated tests in one class |
| Use `@DisplayName` for clarity | Rely only on method names |