
# skill-upgrader

Compare, evaluate, and upgrade your AI agent skills — so they never go stale.

License: MIT · Standard: SKILL.md

English | 简体中文 | 日本語


Skills for AI coding agents (Claude Code, Cursor, Codex, Antigravity) keep getting better — but once you install one, it sits there forever. New versions ship, better alternatives appear, and your skills silently rot.

skill-upgrader fixes this. It gives your AI agent a structured process for comparing, evaluating, merging, and upgrading SKILL.md skills — across any tool that supports the standard.

## Quick Start

### 1. Install (30 seconds)

Copy the skill-upgrader folder into your project's skills directory:

```bash
# Clone
git clone https://github.com/TZX33/skill-upgrader.git /tmp/skill-upgrader

# Copy to your project (Claude Code)
cp -r /tmp/skill-upgrader ~/.claude/skills/skill-upgrader

# Or copy to any project directory
cp -r /tmp/skill-upgrader your-project/skills/skill-upgrader
```

### 2. Compare two skills

When you find a new skill that overlaps with one you already have:

```
You:    I found a better code-review skill. Compare it with my current one.
Agent:  [loads both SKILL.md files]
        [scores each on 6 dimensions]
        [recommends: Replace / Keep / Merge]
```

### 3. Check for updates

```
You:    Check if my skills have upstream updates.
Agent:  [reads `source` field from each skill's frontmatter]
        [fetches remote version]
        [reports what changed]
```

Or use the script directly:

```bash
python3 skills/skill-upgrader/scripts/check_update.py skills/my-skill
```

That's it. After these three steps you'll know whether this tool is for you.


## Five Scenarios

| # | Scenario | When to use | What happens |
|----|----------|-------------|--------------|
| S1 | Compare | Found a skill that overlaps with an existing one | Side-by-side scoring → Replace / Keep / Merge |
| S2 | Update | Want to check if installed skills have newer versions | Read `source` frontmatter → fetch remote → diff → upgrade |
| S3 | Merge | Two skills each have strengths you want | Pick trunk → graft best elements → validate |
| S4 | Evaluate | Discovered a skill, unsure if worth installing | Score against ideal → fitness check → recommend |
| S5 | Audit | Periodic housekeeping | Inventory → stale detection → cleanup recommendations |

## Evaluation Framework

Every comparison and evaluation uses 6 weighted dimensions:

| Dimension | Weight | What it measures |
|-----------|--------|------------------|
| Description trigger coverage | 20% | Will the agent actually find and use this skill? |
| Execution step clarity | 25% | Can the agent execute without ambiguity? |
| Writing style compliance | 10% | Follows imperative form, consistent voice? |
| Resource completeness | 15% | Are scripts/references/assets used where they add value? |
| Project fit | 20% | Does this match YOUR project's needs? |
| Maintainability | 10% | Is it reasonably sized and well-structured? |

Detailed scoring rubrics with examples: references/evaluation.md
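For concreteness, here is a minimal sketch of how six per-dimension scores (say, on a 0–10 scale) could be folded into one weighted composite. The dimension keys and function name are hypothetical, not skill-upgrader's actual internals:

```python
# Hypothetical sketch: fold six dimension scores (0-10 each) into a
# weighted composite using the weights from the table above.
# Key names and the function are illustrative, not skill-upgrader's code.
WEIGHTS = {
    "trigger_coverage":      0.20,
    "step_clarity":          0.25,
    "style_compliance":      0.10,
    "resource_completeness": 0.15,
    "project_fit":           0.20,
    "maintainability":       0.10,
}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted average of the six dimension scores (0-10 scale)."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

old = {"trigger_coverage": 6, "step_clarity": 7, "style_compliance": 8,
       "resource_completeness": 5, "project_fit": 9, "maintainability": 7}
new = {"trigger_coverage": 8, "step_clarity": 9, "style_compliance": 7,
       "resource_completeness": 8, "project_fit": 6, "maintainability": 8}

print(f"old: {composite_score(old):.2f}  new: {composite_score(new):.2f}")
# old: 7.00  new: 7.75 -- yet new loses on project fit: a "mixed" result
```

Note that a higher composite alone doesn't force a replacement; the per-dimension pattern feeds the decision tree below.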

## Decision Tree

```
Compare complete
 │
 ├── New ≥ Old on ALL dimensions
 │   └── ✅ Path A: REPLACE
 │       Archive old → install new → done
 │
 ├── New ≤ Old on ALL dimensions
 │   └── 📌 Path B: KEEP existing
 │       Log evaluation → done
 │
 └── Mixed results
     └── 🔀 Path C: MERGE
         Pick trunk (higher project-fit)
         → graft stronger elements from the other
         → validate merged result
         → done
```
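The same logic, as a hedged Python sketch (the function is illustrative, and ties go to REPLACE, mirroring the order of the branches above):

```python
# Illustrative sketch of the decision tree; not skill-upgrader's internals.
def decide(old: dict[str, float], new: dict[str, float]) -> str:
    dims = old.keys()
    if all(new[d] >= old[d] for d in dims):
        return "REPLACE"  # Path A: new wins or ties on every dimension
    if all(new[d] <= old[d] for d in dims):
        return "KEEP"     # Path B: existing wins on every dimension
    return "MERGE"        # Path C: mixed results -> pick trunk, graft rest
```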

## Upstream Tracking

Add `source` and `version` to your skill's frontmatter to enable update checking:

```yaml
---
name: my-skill
description: ...
source: https://github.com/org/repo
version: 2026-03-20
---
```
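An update checker only needs to read that block. As a rough sketch of the idea (assuming PyYAML; this is not check_update.py's actual code):

```python
# Rough sketch: pull `source` and `version` out of SKILL.md frontmatter.
# Assumes PyYAML (pip install pyyaml); not the real check_update.py.
from pathlib import Path
import yaml

def read_frontmatter(skill_md: str) -> dict:
    text = Path(skill_md).read_text(encoding="utf-8")
    if not text.startswith("---"):
        return {}  # no frontmatter: treat as local-only
    _, block, _ = text.split("---", 2)  # between the first two --- fences
    return yaml.safe_load(block) or {}

meta = read_frontmatter("skills/my-skill/SKILL.md")
print(meta.get("source"), meta.get("version"))
```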

Then check for updates:

```bash
# Single skill
python3 skills/skill-upgrader/scripts/check_update.py skills/my-skill

# With section-level diff
python3 skills/skill-upgrader/scripts/check_update.py skills/my-skill --diff
```

Without a `source` field, skills are treated as local-only and skipped during update checks.
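To make the `--diff` idea concrete, here is one plausible way to compute a section-level diff: split each version on `##` headings and compare section by section. This is a sketch of the concept, not check_update.py's implementation:

```python
# Hypothetical section-level diff: split two markdown bodies on "## "
# headings and report which sections were added, removed, or changed.
import re

def sections(markdown: str) -> dict[str, str]:
    parts = re.split(r"^(## .+)$", markdown, flags=re.MULTILINE)
    # parts alternates: [preamble, heading, body, heading, body, ...]
    return {parts[i].strip(): parts[i + 1].strip()
            for i in range(1, len(parts) - 1, 2)}

def section_diff(local: str, remote: str) -> dict[str, str]:
    a, b = sections(local), sections(remote)
    return {
        h: ("added upstream" if h not in a
            else "removed upstream" if h not in b
            else "changed")
        for h in a.keys() | b.keys()
        if a.get(h) != b.get(h)
    }
```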

## Compatibility

skill-upgrader works with any AI tool that supports the SKILL.md standard:

| Tool | Skill Discovery |
|------|-----------------|
| Claude Code | ✅ Native |
| Antigravity | ✅ Native |
| Codex CLI | ✅ Native |
| Cursor | ⚠️ Via Rules |
| OpenCode | ⚠️ Via AGENTS.md bridge |

## Project Structure

```
skill-upgrader/
├── SKILL.md              # Core instructions (5 scenarios + decision tree)
├── README.md             # English (this file)
├── README_zh.md          # Simplified Chinese
├── README_ja.md          # Japanese
├── LICENSE               # MIT
├── references/
│   └── evaluation.md     # Detailed scoring rubrics with examples
└── scripts/
    └── check_update.py   # CLI tool for upstream version checking
```

## Why This Exists

The SKILL.md ecosystem is growing fast. skill-creator handles creation. But nobody handles the lifecycle after creation:

- You install a skill → it works → six months later the source repo has shipped 20 improvements → your copy is stale
- You find two skills that do similar things → which one is better? → no framework for deciding
- You merge two skills manually → mixed styles, forgotten sections → quality degrades

skill-upgrader closes this gap. Think of it as dependency management for AI agent skills.

## Real-World Examples

🧑‍💻 "I had two code-review skills — one from gstack and one I wrote myself. skill-upgrader scored them side-by-side and helped me merge the best parts of both into one. Saved me an hour of manual comparison."

@TZX33

Using skill-upgrader? Share your story →

## 📣 Share Your Experience

Found it useful? Every real-world use case helps this tool get better; see CONTRIBUTING.md for how to share yours.

## Contributing

Found a scenario that isn't covered? An evaluation dimension that's missing?

1. Fork this repo
2. Add your scenario to `SKILL.md`
3. If it needs scoring criteria, update `references/evaluation.md`
4. Submit a PR with a concrete example of when your change would help

## License

MIT — free forever, do what you want.
