# Bounty Bot

*Autonomous GitHub issue validation engine for bug bounty programs*

Architecture · API · Detection · Rules · Deployment · Configuration

## Overview

Bounty Bot is the validation backbone for PlatformNetwork's bug bounty program. Every issue submitted to bounty-challenge is automatically ingested, scored through a multi-stage detection pipeline, and labeled with a verdict — all without human intervention.

**Core principles**

- LLM-assisted evaluation with deterministic rule enforcement.
- Zero-trust design: HMAC-signed inter-service calls, no hardcoded secrets.
- Extensible rules engine: drop a `.ts` file in `rules/` and it's live.

## How It Works

```mermaid
flowchart LR
    subgraph Ingest
        GH[GitHub Webhook]
        POLL[Poller]
        API[Atlas Command]
    end

    subgraph Validate["Validation Pipeline"]
        direction TB
        MEDIA[Media Check] --> SPAM[Spam Detection]
        SPAM --> DUP[Duplicate Detection]
        DUP --> EDIT[Edit History]
        EDIT --> CODE["Code Rules\n(rules/code/)"]
        CODE --> LLM["LLM Gate\n+ LLM Rules\n(rules/llm/)"]
    end

    subgraph Output
        GH_MUT[GitHub Labels\n+ Comments]
        ATLAS_CB[Atlas Callback]
        DB[(SQLite)]
    end

    GH --> Validate
    POLL --> Validate
    API --> Validate
    Validate --> GH_MUT
    Validate --> ATLAS_CB
    Validate --> DB
```

Each issue passes through six stages. A failure at any stage short-circuits to a verdict:

| Stage | What it checks | Failure verdict |
|---|---|---|
| Media | Screenshot/video present and accessible (HTTP 200) | `invalid` |
| Spam | Template similarity, burst frequency, parity scoring | `invalid` |
| Duplicate | Jaccard + Qwen3 cosine hybrid (0.4·J + 0.6·C) | `duplicate` |
| Edit History | Suspicious post-submission edits (evidence swaps) | `invalid` |
| Code Rules | Programmatic checks from `rules/code/*.ts` | `invalid` or penalty |
| LLM Gate | Gemini 3.1 Pro + LLM instructions from `rules/llm/*.ts` | `invalid` |

If all stages pass, the issue is labeled valid.
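The duplicate stage's hybrid score can be sketched as below. Only the 0.4/0.6 weighting comes from the table above; the function names, tokenization, and embedding handling are illustrative assumptions, not the bot's actual internals.

```typescript
// Sketch of the duplicate-detection hybrid score (0.4·Jaccard + 0.6·cosine).
// Names and tokenization are illustrative; only the weights come from the docs.

function jaccard(a: Set<string>, b: Set<string>): number {
  const intersection = [...a].filter((t) => b.has(t)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : intersection / union;
}

function cosine(u: number[], v: number[]): number {
  let dot = 0;
  let normU = 0;
  let normV = 0;
  for (let i = 0; i < u.length; i++) {
    dot += u[i] * v[i];
    normU += u[i] * u[i];
    normV += v[i] * v[i];
  }
  return normU === 0 || normV === 0 ? 0 : dot / Math.sqrt(normU * normV);
}

// Token overlap catches near-verbatim copies; embedding similarity
// (Qwen3 vectors) catches paraphrased duplicates.
function hybridScore(
  aTokens: Set<string>,
  bTokens: Set<string>,
  aVec: number[],
  bVec: number[],
): number {
  return 0.4 * jaccard(aTokens, bTokens) + 0.6 * cosine(aVec, bVec);
}
```

Weighting the embedding term higher reflects that paraphrased resubmissions are harder to catch with token overlap alone.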


## Quick Start

```shell
git clone https://github.com/PlatformNetwork/bounty-bot.git
cd bounty-bot
cp .env.example .env   # fill in your tokens
npm install
npm run dev
```

Or with Docker:

```shell
docker compose up -d   # starts bounty-bot + Redis + Watchtower
```

The API listens on port 3235. Watchtower auto-updates from GHCR every 60 seconds.


## API

All `/api/v1/*` endpoints require HMAC authentication (`X-Signature` + `X-Timestamp` headers).

| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/v1/validation/trigger` | Trigger validation |
| GET | `/api/v1/validation/:issue/status` | Get verdict status |
| POST | `/api/v1/validation/:issue/requeue` | Re-validate (24h window) |
| POST | `/api/v1/validation/:issue/force-release` | Clear stale lock |
| GET | `/api/v1/rules` | List loaded rules |
| POST | `/api/v1/rules/reload` | Hot-reload rules from disk |
| GET | `/health` | Liveness probe |

Full schemas and examples: `docs/API.md`
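A client-side signer for those two headers might look like the sketch below. The signed payload layout (here `timestamp + "." + body`) and the SHA-256 digest are assumptions; `docs/API.md` defines the real scheme.

```typescript
import { createHmac } from "node:crypto";

// Hypothetical signer for the X-Signature / X-Timestamp headers.
// Payload layout and hash algorithm are assumptions, not the bot's spec.
function signRequest(
  secret: string,
  body: string,
  timestamp: string = Date.now().toString(),
): Record<string, string> {
  const signature = createHmac("sha256", secret)
    .update(`${timestamp}.${body}`) // assumed layout: "<timestamp>.<raw body>"
    .digest("hex");
  return { "X-Signature": signature, "X-Timestamp": timestamp };
}
```

Including the timestamp in the signed payload (rather than only as a header) is what lets the server reject replayed requests with a tampered `X-Timestamp`.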


## Rules Engine

Two kinds of rules, two directories:

```text
rules/
  code/               # Programmatic checks — executed by the engine
    validity.ts       # body length, title quality, structure
    media.ts          # evidence requirements
    spam.ts           # template detection, generic titles
    content.ts        # profanity, length limits, context
    scoring.ts        # penalty weight adjustments
  llm/                # LLM instructions — injected into the prompt
    evaluation.ts     # evidence priority, reproducibility, confidence
    tone.ts           # professional tone, no sympathy verdicts
    spam-detection.ts # template farming, AI filler, screenshot mismatch
    output-format.ts  # tool usage, reasoning order, no internal leaks
```

Code rules run programmatically and produce pass/fail results. They short-circuit the pipeline:

| Severity | Effect |
|---|---|
| `reject` | Instant `invalid` verdict |
| `require` | Must pass or `invalid` |
| `penalize` | Adds weight to penalty score |
| `flag` | Logged but no verdict change |
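A code rule module might take roughly the following shape. The interface and names here are illustrative assumptions; the real contract is defined in `docs/RULES.md`.

```typescript
// Hypothetical shape of a rules/code/*.ts module. All names are
// illustrative; see docs/RULES.md for the engine's real interface.
type Severity = "reject" | "require" | "penalize" | "flag";

interface RuleResult {
  pass: boolean;
  reason?: string;
}

interface CodeRule {
  id: string;
  severity: Severity;
  check(issue: { title: string; body: string }): RuleResult;
}

const minBodyLength: CodeRule = {
  id: "min-body-length",
  severity: "require", // must pass or the issue is labeled invalid
  check: (issue) =>
    issue.body.trim().length >= 200
      ? { pass: true }
      : { pass: false, reason: "Report body is under 200 characters" },
};
```

A `require` rule like this short-circuits the pipeline on failure, while a `penalize` rule would instead return a weight for the aggregate penalty score.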

LLM rules are natural-language instructions injected into the model's system prompt, ordered by priority (critical > high > normal > low). They shape how the model reasons and phrases its verdict.
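An LLM rule and the priority ordering described above can be sketched as follows; the field names are assumptions, not the bot's real interface.

```typescript
// Hypothetical shape of a rules/llm/*.ts module plus the
// critical > high > normal > low ordering described above.
type Priority = "critical" | "high" | "normal" | "low";

interface LlmRule {
  id: string;
  priority: Priority;
  instruction: string; // injected into the model's system prompt
}

const PRIORITY_ORDER: Priority[] = ["critical", "high", "normal", "low"];

// Sort rules so higher-priority instructions appear first in the prompt.
function sortByPriority(rules: LlmRule[]): LlmRule[] {
  return [...rules].sort(
    (a, b) =>
      PRIORITY_ORDER.indexOf(a.priority) - PRIORITY_ORDER.indexOf(b.priority),
  );
}
```

Putting `critical` instructions first matters because models weight earlier system-prompt content more reliably when instructions conflict.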

Hot-reload without restart: `POST /api/v1/rules/reload`

Full documentation: `docs/RULES.md`


## LLM Integration

Two models are used via OpenRouter; both degrade gracefully if no API key is set.

| Model | Purpose | Used in |
|---|---|---|
| `google/gemini-3.1-pro-preview-customtools` | Issue evaluation with function calling | `deliver_verdict` tool |
| `qwen/qwen3-embedding-8b` | Semantic duplicate detection | Cosine similarity vectors |

The LLM receives pre-computed detection scores and rule evaluation results in its prompt, so rules directly influence the model's reasoning.


## Testing

```shell
npm test             # 149 tests (vitest)
npm run typecheck    # tsc --noEmit
npm run lint         # eslint
```

## Documentation

| Document | Description |
|---|---|
| Architecture | System design, module graph, sequence diagrams, database schema |
| API Reference | Full REST API with request/response schemas |
| Detection Engine | Spam, duplicate, edit-history, and LLM scoring internals |
| Rules Engine | How to write, load, and manage validation rules |
| Configuration | All environment variables and their defaults |
| Deployment | Docker, Redis, Watchtower, Atlas integration |

Controlled by Atlas · Part of PlatformNetwork
