# SHERPA: Stakeholder Hub for Enhancement Request Prioritization & Action
Percona's product intelligence platform. Aggregates evidence from docs, Jira, Clari call transcripts, ClickHouse telemetry, Slack, and community forums to score features with transparent, reproducible metrics. Built to prevent opinion-based prioritization: every score shows why it ranks where it does, and sparse data is visible rather than hidden behind a single number.
Built on Gilad/Cagan discovery principles. 100% vibe coded with Claude.
## Demand Signal Engine

Aggregates customer demand from multiple sources into scored, deduplicated signals:
- Dual scoring: Business Impact (MRR, deal blockers, churn risk) + User Value (community, surveys, calls)
- Source diversity tracking — signals corroborated across source types rank higher
- Confidence levels (Strong / Moderate / Weak) based on evidence quality and breadth
- Anti-SCORE warnings flag Sparse Evidence, Single Source, and Stale Data
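The dual-scoring idea can be sketched as follows. This is a minimal illustration; the field names, weights, and confidence thresholds are hypothetical, not the actual `demand/scoring.py` implementation:

```python
# Illustrative sketch of dual scoring with source-diversity tracking.
# Field names ("mrr_weight", "user_weight") and tier thresholds are
# assumptions for demonstration, not the real scoring.py logic.

def confidence(evidence_count: int, source_types: int) -> str:
    """Map evidence breadth to a Strong / Moderate / Weak tier."""
    if evidence_count >= 5 and source_types >= 3:
        return "Strong"
    if evidence_count >= 2 and source_types >= 2:
        return "Moderate"
    return "Weak"

def score_signal(evidence: list[dict]) -> dict:
    """Combine Business Impact and User Value, rewarding corroboration."""
    business = sum(e.get("mrr_weight", 0) for e in evidence)
    user = sum(e.get("user_weight", 0) for e in evidence)
    sources = {e["source"] for e in evidence}
    return {
        "business_impact": business,
        "user_value": user,
        "diversity_bonus": len(sources),  # distinct source types seen
        "confidence": confidence(len(evidence), len(sources)),
    }
```

A signal backed by a Jira ticket and a Slack thread lands at "Moderate"; one forum post alone stays "Weak" no matter how loud it is.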
## Evidence

Searchable repository of all ingested evidence items with source attribution, customer context, and signal linkage.
## Cut/Keep

Evaluates Percona Server features for cut-or-keep decisions using evidence from 6 data sources:
- Evidence Score (0-100) — Composite of docs pageviews, Jira tickets, telemetry adoption, Clari call mentions, Slack mentions, and forum threads. Log-scaled per source with a diversity bonus for breadth.
- Impact Score (0-100) — Measures cut urgency: 50% evidence deficit + 25% docs deficit + 25% Jira maintenance burden. Higher = better cut candidate.
- Telemetry Adoption — Real instance counts from ClickHouse (82K+ PS/PXC instances reporting). Features detected via active_plugins, active_components, and server_config.
- Evidence Items — Individual Slack messages, Jira tickets, Clari call excerpts, and forum threads collected per feature.
- Freeze/Unfreeze — Lock a feature's evidence snapshot for stakeholder review.
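The scoring described above can be sketched in a few lines. The helper names and the log-scaling cap are illustrative assumptions, not the actual `update_cut_keep_evidence.py` code; only the Impact Score weights (50/25/25) come from the description above:

```python
import math

# Sketch of the Cut/Keep scoring. evidence_component() shows the
# log-scaling idea; the cap of 10,000 is a hypothetical constant.

def evidence_component(count: int, cap: int = 10_000) -> float:
    """Log-scale one source's raw count onto 0-100."""
    if count <= 0:
        return 0.0
    return min(100.0, 100 * math.log1p(count) / math.log1p(cap))

def impact_score(evidence_score: float, docs_score: float,
                 jira_burden: float) -> float:
    """Higher = better cut candidate. All inputs on a 0-100 scale."""
    return (0.5 * (100 - evidence_score)    # evidence deficit
            + 0.25 * (100 - docs_score)     # docs deficit
            + 0.25 * jira_burden)           # maintenance burden
```

For example, a feature with evidence 20, docs 10, and Jira burden 40 scores 0.5·80 + 0.25·90 + 0.25·40 = 72.5, a fairly strong cut candidate.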
## Voting Portal

Email-verified voting on published features. Three importance levels (Nice to have / Important / Critical) with breakdown bars. Comments with display names and masked emails.
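A minimal sketch of the email-verification flow, assuming a short-lived numeric code; this is illustrative, not the actual `server.py` logic:

```python
# Hypothetical email-code verification sketch. Codes are hashed at rest
# and expire after CODE_TTL seconds; without SMTP configured, the code
# would simply be printed to the console instead of emailed.
import hashlib
import secrets
import time

CODE_TTL = 15 * 60  # seconds a code stays valid (assumed)

def issue_code(email: str, store: dict) -> str:
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit code
    store[email] = (hashlib.sha256(code.encode()).hexdigest(), time.time())
    return code  # sent via SMTP, or printed to console in dev

def verify(email: str, code: str, store: dict) -> bool:
    digest, issued = store.get(email, ("", 0.0))
    fresh = time.time() - issued < CODE_TTL
    return fresh and hashlib.sha256(code.encode()).hexdigest() == digest
```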
## Slack Bot (@sherpa)

Slash commands for search, logging, top signals, signal details, and weekly digests. Proactive anomaly scanning every 4 hours.
## Admin Panel

Voter management, spam removal, comment moderation, feature CRUD, AI-generated feature descriptions.
## Architecture

```
Browser ──► nginx (SSL) ──► gunicorn ──► Flask (server.py, 1700 LOC)
    │
    ├── portal.db (SQLite WAL)
    │   ├── voters / votes / comments
    │   ├── feature_descriptions / feature_summaries
    │   ├── signals / evidence / signal_evidence
    │   └── cut_keep_features / cut_keep_comments
    │
    ├── Notion API (signals + evidence DBs)
    ├── Clari Copilot API (call transcripts)
    ├── ClickHouse (telemetry adoption data)
    ├── SMTP (verification emails)
    ├── Rybbit (self-hosted analytics)
    │
    ├── demand/ (Demand Signal Engine)
    │   ├── clari_connector.py
    │   ├── ingestion.py / matching.py / scoring.py
    │   ├── notion_sync.py / git_sync.py
    │   └── slack_notify.py
    │
    └── bot/ (Slack Bot)
        ├── handlers.py
        ├── search.py
        └── notifications.py
```
## Environments

| Environment | Port | URL | Data |
|---|---|---|---|
| Production | 3000 | sherpa.int.percona.com | ~/sherpa-prod/data/portal.db |
| Staging | 3001 | sherpa-staging.int.percona.com | ~/sherpa-staging/data/portal.db |
Both require Percona VPN.
## Data Sources

| Source | What | Used In |
|---|---|---|
| Percona Docs | Pageviews (24-month) per feature doc | Cut/Keep evidence score |
| Jira | Ticket counts (bugs, tasks, stories) | Cut/Keep evidence + impact scores |
| ClickHouse Telemetry | Plugin/component adoption across 82K+ instances | Cut/Keep evidence score |
| Clari Copilot | Customer call transcript mentions | Cut/Keep evidence, Demand signals |
| Slack | Channel message mentions (public + private) | Cut/Keep evidence |
| Community Forums | Thread counts per feature | Cut/Keep evidence score |
| Notion | Demand signal + evidence databases | Signals, Evidence pages |
| Salesforce | MRR, deal data (via demand engine) | Demand signal scoring |
| Rybbit | Portal usage analytics | Admin insights |
## Scripts

| Script | Purpose | Run |
|---|---|---|
| update_telemetry.py | Pull feature adoption counts from ClickHouse into cut_keep_features | Requires CLICKHOUSE_USER + CLICKHOUSE_PASSWORD |
| update_cut_keep_evidence.py | Recompute evidence scores, summaries, and items for all 31 features | Runs during deploy; also standalone |
| seed_cut_keep.py | Seed initial cut/keep feature rows | Runs during deploy (idempotent) |
| seed_new_features.py | Bulk-add voting portal features from Notion | One-time / as needed |
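A standalone telemetry refresh outside a deploy might look like this (credential values are placeholders):

```shell
# Pull fresh adoption counts, then recompute evidence scores.
# Credentials are placeholders; set real values before running.
export CLICKHOUSE_USER="claudeai_ro"
export CLICKHOUSE_PASSWORD="..."
python3 update_telemetry.py

python3 update_cut_keep_evidence.py
```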
## Quick Start

```shell
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

# Required
export NOTION_API_KEY="ntn_xxxxxxxxx"
export PORTAL_ADMIN_KEY="your-secret-admin-key"

# Optional: SMTP (without it, verification codes print to console)
export SMTP_HOST="smtp.gmail.com"
export SMTP_PORT="587"
export SMTP_USER="sherpa@percona.com"
export SMTP_PASS="app-password-here"

# Optional: Slack Bot
export SLACK_BOT_TOKEN="xoxb-..."
export SLACK_SIGNING_SECRET="..."
export SLACK_CHANNEL_ID="C0123456789"

# Optional: ClickHouse telemetry
export CLICKHOUSE_USER="claudeai_ro"
export CLICKHOUSE_PASSWORD="..."

python server.py
# http://localhost:3000
```

## Deploy

```shell
# Deploy to staging
ssh sherpa 'cd ~/SHERPA && bash deploy.sh staging'

# Deploy to production
ssh sherpa 'cd ~/SHERPA && bash deploy.sh prod'

# First-time setup
ssh sherpa 'cd ~/SHERPA && bash deploy.sh setup'
```

The deploy script handles: git pull, pip install, DB migrations, seed scripts, evidence recomputation, and service restart.
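Those deploy steps could be outlined roughly as follows. This is a hypothetical sketch, not the actual deploy.sh:

```shell
#!/usr/bin/env bash
# Hypothetical outline of the deploy steps listed above; not the real
# deploy.sh. DB migrations run inside server.py on startup.
set -euo pipefail
ENV="${1:-staging}"                        # staging | prod

git pull --ff-only                         # 1. update code
pip install -r requirements.txt            # 2. dependencies
python3 seed_cut_keep.py                   # 3. idempotent seeding
python3 update_cut_keep_evidence.py        # 4. recompute evidence
sudo systemctl restart "sherpa-${ENV}"     # 5. restart the service
```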
## Repository Layout

```
SHERPA/
├── server.py                     # Flask backend (1700 LOC) — all routes, DB migrations, API
├── wsgi.py                       # Gunicorn entry point
├── requirements.txt
├── deploy.sh                     # Isolated env deploy (staging + prod)
│
├── static/
│   ├── index.html                # Voting portal
│   ├── signals.html              # Demand signals list
│   ├── signal-detail.html        # Signal detail view
│   ├── evidence.html             # Customer evidence list
│   ├── cut-keep.html             # Cut/Keep sweep table
│   ├── cut-keep-detail.html      # Cut/Keep feature detail
│   ├── admin.html                # Admin panel
│   ├── header.js                 # Shared navigation + auth
│   ├── tokens.css / components.css / header.css
│   ├── favicon.svg / logo-small.png
│   └── product-icons/            # Per-product SVG icons
│
├── demand/                       # Demand Signal Engine
│   ├── clari_connector.py        # Clari Copilot API integration
│   ├── ingestion.py              # Extract → Classify → Match → Store
│   ├── matching.py               # LLM problem-level match + keyword fallback
│   ├── scoring.py                # Dual scoring: Business Impact + User Value
│   ├── notion_sync.py            # Notion DB read/write sync
│   ├── git_sync.py               # Git-backed canonical signal store
│   ├── slack_notify.py           # Slack webhook notifications
│   └── models.py                 # DemandSignal, CustomerEvidence
│
├── bot/                          # Slack Bot (@sherpa)
│   ├── handlers.py               # /sherpa slash commands + events
│   ├── search.py                 # Enterprise signal search
│   └── notifications.py          # Digest + anomaly formatting
│
├── update_cut_keep_evidence.py   # Evidence scoring + data collection
├── update_telemetry.py           # ClickHouse telemetry ingestion
├── seed_cut_keep.py              # Seed 31 PS/PXC features
├── seed_new_features.py          # Seed voting portal features
│
└── deploy/
    ├── sherpa-prod.service       # systemd unit (port 3000)
    ├── sherpa-staging.service    # systemd unit (port 3001)
    └── sherpa.nginx              # nginx reverse proxy config
```