102 changes: 102 additions & 0 deletions .agents/skills/ci-prep/SKILL.md
@@ -0,0 +1,102 @@
---
name: ci-prep
description: Prepares the current branch for CI by running the exact same steps locally and fixing issues. If CI is already failing, fetches the GH Actions logs first to diagnose. Use before pushing, when CI is red, or when the user says "fix ci".
argument-hint: "[--failing] [optional job name to focus on]"
---

# CI Prep

Prepare the current state for CI. If CI is already failing, fetch and analyze the logs first.

## Arguments

- `--failing` — Indicates a GitHub Actions run is already failing. When present, you MUST execute **Step 1** before doing anything else.
- Any other argument is treated as a job name to focus on (but all failures are still reported).

If `--failing` is NOT passed, skip directly to **Step 2**.

## Step 1 — Fetch failed CI logs (only when `--failing`)

You MUST do this before any other work.

```bash
BRANCH=$(git branch --show-current)
PR_JSON=$(gh pr list --head "$BRANCH" --state open --json number,title,url --limit 1)
```

If the JSON array is empty, **stop immediately**:
> No open PR found for branch `$BRANCH`. Create a PR first.
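That guard can be sketched as a small helper (a sketch; `jq` is assumed available, as the snippets below already assume):

```bash
# Succeeds only when the PR-list JSON contains at least one entry.
has_open_pr() {
  [ "$(printf '%s' "$1" | jq 'length')" -gt 0 ]
}

# Usage sketch:
# has_open_pr "$PR_JSON" || { echo "No open PR found for branch $BRANCH. Create a PR first." >&2; exit 1; }
```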

Otherwise fetch the logs:

```bash
PR_NUMBER=$(echo "$PR_JSON" | jq -r '.[0].number')
gh pr checks "$PR_NUMBER"
RUN_ID=$(gh run list --branch "$BRANCH" --limit 1 --json databaseId --jq '.[0].databaseId')
gh run view "$RUN_ID"
gh run view "$RUN_ID" --log-failed
```

Read **every line** of `--log-failed` output. For each failure note the exact file, line, and error message. If a job name argument was provided, prioritize that job but still report all failures.

## Step 2 — Analyze the CI workflow

1. Read `.github/workflows/ci.yml` completely. Parse every job and every step.
2. Extract the ordered list of commands the CI actually runs.
3. Note environment variables, matrix strategies, conditional steps, and service containers.

**Do NOT assume the steps are `make lint`, `make test`, `make build`.** Extract what the CI *actually does*.
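As a rough first pass, single-line `run:` commands can be pulled out with plain text tools (a sketch; multi-line `run: |` blocks need a YAML-aware tool such as `yq` instead):

```bash
# List single-line `run:` commands from a workflow file, stripped of the key.
list_run_commands() {
  grep -E '^[[:space:]]*(- )?run:' "$1" | sed -E 's/^[[:space:]]*(- )?run:[[:space:]]*//'
}

# Usage sketch:
# list_run_commands .github/workflows/ci.yml
```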

## Step 3 — Run each CI step locally, in order

Work through failures in this priority order:

1. **Formatting** — run auto-formatters first to clear noise
2. **Compilation errors** — must compile before lint/test
3. **Lint violations** — fix the code pattern
4. **Runtime / test failures** — fix source code to satisfy the test

For each command extracted from the CI workflow:

1. Run the command exactly as CI would run it.
2. If the step fails, **stop and fix the issues** before continuing to the next step.
3. After fixing, re-run the same step to confirm it passes.
4. Move to the next step only after the current one succeeds.
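The run-fix-rerun loop above can be sketched as follows (the command list in the usage line is illustrative; the real list comes from Step 2):

```bash
# Run each CI command in order and stop at the first failure, so it can be
# fixed and re-run before moving on.
run_ci_steps() {
  local cmd
  for cmd in "$@"; do
    echo "==> $cmd"
    if ! eval "$cmd"; then
      echo "FAILED: $cmd (fix, re-run this step, then continue)" >&2
      return 1
    fi
  done
}

# Usage sketch:
# run_ci_steps "make fmt-check" "make lint" "make test"
```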

### Hard constraints

- **NEVER modify test files** — fix the source code, not the tests
- **NEVER add suppressions** (`#pragma warning disable`, `#[allow(...)]`, `// eslint-disable`)
- **NEVER delete or ignore failing tests**
- **NEVER remove assertions**

If stuck on the same failure after 5 attempts, ask the user for help.

## Step 4 — Loop

- Go back to the first CI step and repeat until every step passes locally. If `--failing`, you should see the exact same errors in your terminal that CI shows in its logs. Fix those errors until they are resolved.

## Step 5 — Commit/Push (only when `--failing`)

Once all CI steps pass locally:

1. Commit, but DO NOT mark yourself as an author or co-author of the commit. Only the user authors the commit!
2. Push
3. Monitor until completion or failure
4. Upon failure, go back to Step 1
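Monitoring can lean on `gh run watch` (a sketch; `--exit-status` makes the command itself fail when the run fails, which is the "go back to Step 1" signal):

```bash
# Watch the newest run on the current branch; a non-zero exit means CI failed.
watch_latest_run() {
  local branch run_id
  branch=$(git branch --show-current)
  run_id=$(gh run list --branch "$branch" --limit 1 --json databaseId --jq '.[0].databaseId')
  gh run watch "$run_id" --exit-status
}
```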

## Rules

- *You are not allowed to commit/push until all tests pass*. Do not waste GitHub Actions minutes! The local CI run must prove that everything works.
- **Always read the CI workflow first.** Never assume what commands CI runs.
- Do not push if any step fails (unless `--failing` and all steps now pass)
- Fix issues found in each step before moving to the next
- Never skip steps or suppress errors
- If the CI workflow has multiple jobs, run all of them (respecting dependency order)
- Skip steps that are CI-infrastructure-only (checkout, setup actions, cache steps, artifact uploads) — focus on the actual build/test/lint commands

## Success criteria

- Every command that CI runs has been executed locally and passed
- All fixes are applied to the working tree
- The CI passes successfully (if you are correcting an existing failure)
106 changes: 106 additions & 0 deletions .agents/skills/code-dedup/SKILL.md
@@ -0,0 +1,106 @@
---
name: code-dedup
description: Searches for duplicate code, duplicate tests, and dead code, then safely merges or removes them. Use when the user says "deduplicate", "find duplicates", "remove dead code", "DRY up", or "code dedup". Requires test coverage — refuses to touch untested code.
---

# Code Dedup

Carefully search for duplicate code, duplicate tests, and dead code across the repo. Merge duplicates and delete dead code — but only when test coverage proves the change is safe.

## Prerequisites — hard gate

Before touching ANY code, verify these conditions. If any fail, stop and report why.

1. Run `make test` — all tests must pass. If tests fail, stop. Do not dedup a broken codebase.
2. Run `make coverage-check` — coverage must meet the repo's threshold. If it doesn't, stop.
3. This repo uses **C#, F#, Rust, and TypeScript** — all statically typed. Proceed.

## Steps

Copy this checklist and track progress:

```
Dedup Progress:
- [ ] Step 1: Coverage inventory complete (tests green, coverage met, baseline noted)
- [ ] Step 2: Dead code scan complete
- [ ] Step 3: Duplicate code scan complete
- [ ] Step 4: Duplicate test scan complete
- [ ] Step 5: Changes applied
- [ ] Step 6: Verification passed (tests green, coverage stable)
```

### Step 1 — Inventory test coverage

Before deciding what to touch, understand what is tested.

1. Run `make test` and `make coverage-check` to confirm green baseline
2. Note the current coverage percentage — this is the floor. It must not drop.
3. Identify which files/modules have coverage and which do not. Only files WITH coverage are candidates for dedup.
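Capturing the floor can be sketched like this (the exact output format of `make coverage-check` is an assumption here):

```bash
# Pull the first percentage out of coverage output to record as the baseline.
coverage_baseline() {
  grep -oE '[0-9]+(\.[0-9]+)?%' | head -n 1
}

# Usage sketch:
# BASELINE=$(make coverage-check | coverage_baseline)
```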

### Step 2 — Scan for dead code

Search for code that is never called, never imported, never referenced.

1. Look for unused exports, unused functions, unused records, unused variables
2. Use language-appropriate tools:
- **C#/F#:** Analyzer warnings for unused members (build with `-warnaserror` catches these)
- **Rust:** The compiler already warns on dead code — check `make lint` output
- **TypeScript:** Check for unexported functions with zero references in `Lql/LqlExtension/`
3. For each candidate: **grep the entire codebase** for references (including tests, scripts, configs). Only mark as dead if truly zero references.
4. List all dead code found with file paths and line numbers. Do NOT delete yet.
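The whole-codebase reference check in point 3 can be sketched as follows (the symbol in the usage line is hypothetical):

```bash
# Count occurrences of a symbol across the repo, tests/scripts/configs included.
# A candidate is dead only when the sole hit is its own definition.
ref_count() {
  grep -rn --exclude-dir=.git -F "$1" . | wc -l
}

# Usage sketch:
# ref_count "ParseLegacyHeader"
```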

### Step 3 — Scan for duplicate code

Search for code blocks that do the same thing in multiple places.

1. Look for functions/methods with identical or near-identical logic
2. Look for copy-pasted blocks (same structure, maybe different variable names)
3. Look for multiple implementations of the same algorithm or pattern
4. Check across module boundaries — duplicates often hide in different projects (DataProvider, Lql, Sync, Gatekeeper, Samples)
5. For each duplicate pair: note both locations, what they do, and how they differ (if at all)
6. List all duplicates found. Do NOT merge yet.

### Step 4 — Scan for duplicate tests

Search for tests that verify the same behavior.

1. Look for test functions with identical assertions against the same code paths
2. Look for test fixtures/helpers that are duplicated across test files
3. Look for integration tests that fully cover what a unit test also covers (keep the integration test, mark the unit test as redundant per CLAUDE.md rules)
4. List all duplicate tests found. Do NOT delete yet.

### Step 5 — Apply changes (one at a time)

For each change, follow this cycle: **change -> test -> verify coverage -> continue or revert**.

#### 5a. Remove dead code
- Delete dead code identified in Step 2
- After each deletion: run `make test` and `make coverage-check`
- If tests fail or coverage drops: **revert immediately** and investigate

#### 5b. Merge duplicate code
- For each duplicate pair: extract the shared logic into a single function/module
- Update all call sites to use the shared version
- After each merge: run `make test` and `make coverage-check`
- If tests fail: **revert immediately**. The duplicates may have subtle differences you missed.

#### 5c. Remove duplicate tests
- Delete the redundant test (keep the more thorough one)
- After each deletion: run `make coverage-check`
- If coverage drops: **revert immediately**. The "duplicate" test was covering something the other wasn't.
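Each change-test-verify-or-revert cycle can be sketched as below (a sketch; `git restore .` discards unstaged edits only, so run it before staging the change):

```bash
# Verify a single dedup change; revert the working tree if anything regresses.
verify_or_revert() {
  if make test && make coverage-check; then
    echo "change kept"
  else
    echo "regression detected, reverting" >&2
    git restore .
    return 1
  fi
}
```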

### Step 6 — Final verification

1. Run `make test` — all tests must still pass
2. Run `make coverage-check` — coverage must be >= the baseline from Step 1
3. Run `make lint` and `make fmt-check` — code must be clean
4. Report: what was removed, what was merged, final coverage vs baseline

## Rules

- **No test coverage = do not touch.** If a file has no tests covering it, leave it alone entirely.
- **Coverage must not drop.** The coverage floor from Step 1 is sacred.
- **One change at a time.** Make one dedup change, run tests, verify coverage. Never batch multiple dedup changes before testing.
- **When in doubt, leave it.** If two code blocks look similar but you're not 100% sure they're functionally identical, leave both.
- **Preserve public API surface.** Do not change function signatures, record names, or module exports that external code depends on. Internal refactoring only.
- **Three similar lines is fine.** Only dedup when the shared logic is substantial (>10 lines) or when there are 3+ copies.
69 changes: 69 additions & 0 deletions .agents/skills/spec-check/SKILL.md
@@ -0,0 +1,69 @@
---
name: spec-check
description: Audits spec/plan documents against the codebase to ensure every spec section has implementing code and tests. Use when the user says "check specs", "audit specs", "spec coverage", or "validate specs".
---
<!-- agent-pmo:d75d5c8 -->

# Spec Check

Audit spec and plan documents against the codebase.

## Steps

### Step 1 — Validate spec ID structure

For every markdown file in `docs/specs/`:
1. Find all headings that contain a spec ID (pattern: `[GROUP-TOPIC-DETAIL]`)
2. Validate each ID:
- MUST be uppercase, hyphen-separated
- MUST NOT contain sequential numbers (e.g., `[SPEC-001]` is ILLEGAL)
- First word is the **group** — all sections sharing the same group MUST be adjacent
3. Check for duplicate IDs across all spec files
4. Report any violations
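A sketch of the structural checks (the bracket pattern is an approximation of the rules above):

```bash
# Pull every bracketed, uppercase ID-like token out of the spec markdown.
extract_ids() {
  grep -rEoh '\[[A-Z0-9][A-Z0-9-]*\]' "$1"
}

# Violations, sketched:
# extract_ids docs/specs/ | grep '[0-9]'      # sequential numbers: must be empty
# extract_ids docs/specs/ | sort | uniq -d    # duplicate IDs: must be empty
```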

### Step 2 — Find spec documents

Scan `docs/specs/` and `docs/plans/` for all markdown files. For each file:
1. Extract all spec section IDs
2. Build a map: `spec ID → file path + heading`

### Step 3 — Check code references

For each spec ID found in Step 2:
1. Search the entire codebase (C#, Rust, TypeScript, F# files) for references to the ID
2. A reference is any comment containing the spec ID (e.g., `// Implements [AUTH-TOKEN-VERIFY]`)
3. Record which files reference each spec ID
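The search can be sketched as follows (the ID in the usage line is the example from above):

```bash
# List code files that reference a given spec ID in a comment.
find_code_refs() {
  grep -rn --include='*.cs' --include='*.fs' --include='*.rs' --include='*.ts' \
    -F "[$1]" .
}

# Usage sketch:
# find_code_refs AUTH-TOKEN-VERIFY
```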

### Step 4 — Check test references

For each spec ID:
1. Search test files for references to the ID
2. A test reference is a comment like `// Tests [AUTH-TOKEN-VERIFY]` in a test file

### Step 5 — Verify code logic matches spec

For spec IDs that DO have code references:
1. Read the spec section
2. Read the implementing code
3. Check that the code actually does what the spec describes
4. Flag any discrepancies

### Step 6 — Report

Output a table:

| Spec ID | Spec File | Code References | Test References | Status |
|---------|-----------|-----------------|-----------------|--------|

Status values:
- **COVERED** — has both code and test references
- **UNTESTED** — has code references but no test references
- **UNIMPLEMENTED** — has no code references at all
- **ORPHANED** — spec ID found in code but not in any spec document
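Given sorted, de-duplicated ID lists from the spec scan and the code scan, the UNIMPLEMENTED and ORPHANED buckets are plain set differences, sketched here with `comm`:

```bash
# $1: sorted unique IDs found in spec documents
# $2: sorted unique IDs found in code comments
unimplemented() { comm -23 "$1" "$2"; }   # in specs but not in code
orphaned()      { comm -13 "$1" "$2"; }   # in code but not in any spec
```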

## Rules

- Never modify spec documents — only report findings
- Never modify code — only report findings
- Every spec section MUST have at least one code reference and one test reference
- Orphaned references (code mentioning a spec ID that doesn't exist) are errors
36 changes: 36 additions & 0 deletions .agents/skills/submit-pr/SKILL.md
@@ -0,0 +1,36 @@
---
name: submit-pr
description: Creates a pull request with a well-structured description after verifying CI passes. Use when the user asks to submit, create, or open a pull request.
disable-model-invocation: true
---

# Submit PR

Create a pull request for the current branch with a well-structured description.

## Steps

1. Run `make ci` — must pass completely before creating PR
2. **Generate the diff against main.** Run `git diff main...HEAD > /tmp/pr-diff.txt` to capture everything the branch changes relative to main (the three-dot form diffs against the merge base, so unrelated new commits on main are excluded). This is the ONLY source of truth for what the PR contains. **Warning:** the diff can be very large. If it exceeds context limits, process it in chunks (e.g., read sections with `head`/`tail` or split it by file) rather than loading it all at once.
3. **Derive the PR title and description SOLELY from the diff.** Read the diff output and summarize what changed. Ignore commit messages, branch names, and any other metadata — only the actual code/content diff matters.
4. Write PR body using the template in `.github/pull_request_template.md`
5. Fill in (based on the diff analysis from step 3):
- TLDR: one sentence
- What Was Added: new files, features, deps
- What Was Changed/Deleted: modified behaviour
- How Tests Prove It Works: specific test names or output
- Spec/Doc Changes: if any
- Breaking Changes: yes/no + description
6. Use `gh pr create` with the filled template
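The chunking suggested in step 2 can be sketched by splitting the diff per changed file (the output directory is illustrative):

```bash
# Write one diff chunk per changed file so a huge PR diff can be read
# piecewise instead of in one shot.
split_pr_diff() {
  local base=${1:-main} out=${2:-/tmp/pr-diff-chunks}
  mkdir -p "$out"
  git diff "$base...HEAD" --name-only | while read -r f; do
    git diff "$base...HEAD" -- "$f" > "$out/$(echo "$f" | tr '/' '_').diff"
  done
}
```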

## Rules

- Never create a PR if `make ci` fails
- PR description must be specific and tight — no vague placeholders
- Link to the relevant GitHub issue if one exists

## Success criteria

- `make ci` passed
- PR created with `gh pr create`
- PR URL returned to user
57 changes: 57 additions & 0 deletions .agents/skills/upgrade-packages/SKILL.md
@@ -0,0 +1,57 @@
---
name: upgrade-packages
description: Upgrades all dependencies to latest versions across C#, Rust, and TypeScript. Use when the user says "upgrade packages", "update dependencies", "bump versions", or "upgrade deps".
argument-hint: "[language: dotnet|rust|typescript|all]"
---
<!-- agent-pmo:d75d5c8 -->

# Upgrade Packages

Upgrade all dependencies to their latest versions.

## Steps

### Step 1 — Detect packages to upgrade

Based on `$ARGUMENTS` (default: all):

**C# (.NET):**
- Check `Directory.Build.props` for centrally managed package versions
- Check individual `.csproj` files for project-specific packages
- Run `dotnet list package --outdated` on `DataProvider.sln`

**Rust:**
- Check `Lql/lql-lsp-rust/Cargo.toml` workspace dependencies
- Run `cd Lql/lql-lsp-rust && cargo outdated` (install with `cargo install cargo-outdated` if needed)

**TypeScript:**
- Check `Lql/LqlExtension/package.json`
- Run `cd Lql/LqlExtension && npm outdated`

### Step 2 — Upgrade

**C# (.NET):**
- Update version numbers in `Directory.Build.props` for central packages
- For project-specific packages: `dotnet add <project> package <name>`
- Run `dotnet restore`

**Rust:**
- Update versions in `Cargo.toml`
- Run `cargo update`

**TypeScript:**
- Run `npm update` or manually update `package.json` for major versions
- Run `npm install`

### Step 3 — Verify

1. Run `make ci` — must pass completely
2. If any tests fail, investigate whether the failure is from the upgrade
3. Report which packages were upgraded and from/to versions
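For the from/to report, `npm outdated --json` output can be formatted like this (a sketch; `jq` is assumed, and note that `npm outdated` exits non-zero whenever anything is outdated):

```bash
# Turn `npm outdated --json` output into "name: current -> latest" lines.
format_outdated() {
  jq -r 'to_entries[] | "\(.key): \(.value.current) -> \(.value.latest)"'
}

# Usage sketch:
# (cd Lql/LqlExtension && npm outdated --json || true) | format_outdated
```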

## Rules

- Never downgrade a package
- If a major version upgrade breaks tests, report it and revert that specific upgrade
- Always run the full test suite after upgrading
- Update lock files (`Cargo.lock`, `package-lock.json`) as part of the upgrade