Config-driven checks and automations with native GitHub checks/annotations. PR reviews, issue assistants, release notes, scheduled audits, and webhooks. AI-assisted when you want it, fully predictable when you don't.
Visor ships with a ready-to-run configuration at `defaults/.visor.yaml`, so you immediately get:
- A staged review pipeline (overview → security → performance → quality → style).
- Native GitHub integration: check runs, annotations, and PR comments out of the box.
- Built-in code assistant: trigger via PR/issue comments (e.g., `/visor how does it work?`).
- A manual release-notes generator for tagged release workflows.
- No magic: everything is config-driven in `.visor.yaml`; prompts/context are visible and templatable.
- Built for scale: composable checks, tag-based profiles, and flexible `extends` for shared policies.
```yaml
# .github/workflows/visor.yml
name: Visor
on:
  pull_request: { types: [opened, synchronize] } # For fork PRs, see docs/GITHUB_CHECKS.md
  issues: { types: [opened] }
  issue_comment: { types: [created] }
permissions:
  contents: read
  pull-requests: write
  issues: write
  checks: write
jobs:
  visor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: probelabs/visor@v1
        env:
          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }} # or ANTHROPIC/OPENAI
```
- Visor posts a PR summary, creates GitHub Check runs, and annotates lines.
- Note: For external contributor PRs from forks, check runs may not be available due to GitHub security restrictions. Visor will gracefully fall back to PR comments. See Fork PR Support for how to enable check runs for forks.
```yaml
version: "1.0"
steps: # or 'checks' (legacy, both work identically)
  security:
    type: ai
    schema: code-review
    prompt: "Identify security issues in changed files"
    tags: ["fast", "security"]
```
Tip: Pin releases for stability, e.g. `uses: probelabs/visor@v1`. For the latest changes, use `uses: probelabs/visor@nightly`. The `@main` ref is maintained for compatibility but may change frequently and is not recommended for production.
- Node.js 18+ (CI runs Node 20)
- When used as a GitHub Action: appropriate permissions/secrets (see Security Defaults)

One-off run:
```bash
npx -y @probelabs/visor@latest --check all --output table
```
Project dev dependency:
```bash
npm i -D @probelabs/visor
npx visor --check all --output json
```
Short cheatsheet for common tasks:
```bash
# Validate configuration before running checks
visor validate                        # Search for .visor.yaml in current directory
visor validate --config .visor.yaml   # Validate specific config file

# Run all checks with a table output
visor --check all --output table

# Filter by tags (e.g., fast/local) and increase parallelism
visor --tags fast,local --max-parallelism 5

# Analyze full PR diff vs base branch (like GitHub Actions does)
# Auto-enabled for code-review schemas, or force with --analyze-branch-diff
visor --analyze-branch-diff                    # Analyzes diff vs main/master branch
visor --check security --analyze-branch-diff   # Specific checks on branch diff

# Simulate GitHub events for event-based check filtering
visor --event pr_updated    # Run checks triggered by PR updates (auto for code-review)
visor --event issue_opened  # Run checks triggered by new issues
visor --event all           # Run all checks regardless of event filters (default)

# Emit machine-readable results and save to a file
visor --check security --output json --output-file visor-results.json

# Discover options
visor --help
```
See full options and examples: docs/NPM_USAGE.md
Additional guides:
- fail conditions: docs/fail-if.md
- forEach behavior and dependent propagation (including outputs_raw and history precedence): docs/foreach-dependency-propagation.md
- failure routing and `on_finish` (with `outputs_raw` in routing JS): docs/failure-routing.md
- timeouts and provider units: docs/timeouts.md
- execution limits (run caps for safety): docs/limits.md
- output formatting limits and truncation controls: docs/output-formatting.md
- live execution visualizer and control API: docs/debug-visualizer.md
Write and run integration tests for your Visor config in YAML. No network, built-in GitHub fixtures, strict by default, and great CLI output.
- Getting started: docs/testing/getting-started.md
- DSL reference: docs/testing/dsl-reference.md
- Flows: docs/testing/flows.md
- Fixtures & mocks: docs/testing/fixtures-and-mocks.md
- Assertions: docs/testing/assertions.md
- Cookbook: docs/testing/cookbook.md
- CLI & reporters: docs/testing/cli.md
- CI integration: docs/testing/ci.md
- Troubleshooting: docs/testing/troubleshooting.md
Note: examples use descriptive step names (e.g., `extract-facts`, `validate-fact`) to illustrate patterns. These are not built-ins; the test runner works with whatever steps you define in `.visor.yaml`.
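To make the idea concrete, here is a purely hypothetical sketch of what such a suite could look like; every key, step name, and fixture name below is illustrative only (the real DSL is defined in docs/testing/dsl-reference.md):

```yaml
# Hypothetical shape only -- consult docs/testing/dsl-reference.md for the real keys.
tests:
  - name: security step flags a hardcoded secret  # illustrative name
    event: pr_updated                             # illustrative: simulated GitHub event
    fixtures: [pr-with-secret]                    # illustrative: a built-in GitHub fixture
    expect:
      security:                                   # a step you define in .visor.yaml
        min_issues: 1                             # illustrative assertion key
```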
- Check – unit of work (`security`, `performance`).
- Schema – JSON shape checks return (e.g., `code-review`).
- Template – renders results (tables/markdown).
- Group – which comment a check is posted into.
- Provider – how a check runs (`ai`, `mcp`, `http`, `http_client`, `command`, `log`, `github`, `claude-code`).
- Dependencies – `depends_on` controls order; independents run in parallel.
- Tags – label checks (`fast`, `local`, `comprehensive`) and filter with `--tags`.
- Events – PRs, issues, `/review` comments, webhooks, or cron schedules.
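These concepts compose in a single config. A minimal sketch, reusing only fields that appear elsewhere in this README (step names and prompts are illustrative):

```yaml
version: "1.0"
steps:
  security:                     # Check: a unit of work
    type: ai                    # Provider: how the check runs
    schema: code-review         # Schema: the JSON shape the check returns
    prompt: "Identify security issues in changed files"
    tags: ["fast", "security"]  # Tags: select with --tags fast
  performance:
    type: ai
    schema: code-review
    prompt: "Identify performance issues in changed files"
    depends_on: [security]      # Dependencies: runs after security completes
    tags: ["comprehensive"]
```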
Visor is a general SDLC automation framework:
- PR Reviews – security/perf/style findings with native annotations
- Issue Assistant – `/visor …` for code Q&A and triage
- Release Notes – manual or tagged release workflows
- Scheduled Audits – cron-driven checks against main
- Webhooks & HTTP – receive events, call APIs, and post results
- Policy-as-Code – schemas + templates for predictable, auditable outputs
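As one sketch of a non-review use, a scheduled audit and a webhook listener can share a config; this reuses the `schedule` and `http_server` options shown later in this README (the step name and prompt are illustrative):

```yaml
http_server: { enabled: true, port: 8080 }  # receive webhook events
steps:
  nightly-audit:
    type: ai
    schedule: "0 2 * * *"  # cron: audit main every night at 02:00
    prompt: "Audit the repository for stale TODOs and risky dependencies"
```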
- 90-second Quick Start
- Requirements
- Installation
- CLI Usage
- Core Concepts (1 minute)
- Beyond Code Review
- Features
- When to pick Visor
- Developer Experience Playbook
- Tag-Based Check Filtering
- PR Comment Commands
- Suppressing Warnings
- Troubleshooting
- Security Defaults
- Performance & Cost Controls
- Observability
- AI Configuration
- Step Dependencies & Intelligent Execution
- Failure Routing (Auto-fix Loops)
- Claude Code Provider
- GitHub Provider
- AI Session Reuse
- Schema-Template System
- Enhanced Prompts
- SDK (Programmatic Usage)
- Debugging
- Advanced Configuration
- HTTP Integration & Scheduling
- Pluggable Architecture
- GitHub Action Reference
- Output Formats
- Contributing
- Further Reading
- License
- Native GitHub reviews: Check runs, inline annotations, and status reporting wired into PRs.
- Config-first: One `.visor.yaml` defines checks, prompts, schemas, and templates; no hidden logic.
- Structured outputs: JSON Schema validation drives deterministic rendering, annotations, and SARIF.
- Orchestrated pipelines: Dependencies, parallelism, and tag-based profiles; run in Actions or any CI.
- Multi-provider AI: Google Gemini, Anthropic Claude, OpenAI, AWS Bedrock, plus MCP tools, a standalone MCP provider, and the Claude Code SDK.
- Author permissions: Built-in functions to customize workflows based on contributor trust level (owner, member, collaborator, etc.).
- Assistants & commands: `/review` to rerun checks, `/visor …` for Q&A, predictable comment groups.
- HTTP & schedules: Receive webhooks, call external APIs, and run cron-scheduled audits and reports.
- Extensible providers: `ai`, `mcp`, `http`, `http_client`, `log`, `command`, `github`, `claude-code`, `human-input`, `memory` – or add your own.
- Security by default: GitHub App support, scoped tokens, a remote-extends allowlist, and opt-in network usage.
- Observability & control: JSON/SARIF outputs, fail-fast and timeouts, parallelism and cost control.
- You want native GitHub checks/annotations and config-driven behavior
- You need structured outputs (schemas) and predictable templates
- You care about dependency-aware execution and tag-based profiles
- You want PR reviews + assistants + scheduled audits from one tool
- You prefer open-source with no hidden rules
Start with the defaults, iterate locally, and commit a shared `.visor.yaml` for your team.
Example:
```bash
npx -y @probelabs/visor@latest --check all --debug
```
Learn more: docs/dev-playbook.md
Run subsets of checks (e.g., local, fast, security) and select them per environment with `--tags`/`--exclude-tags`.
Example:
```yaml
steps:
  security-quick:
    type: ai
    prompt: "Quick security scan"
    tags: ["local", "fast", "security"]
```
CLI:
```bash
visor --tags local,fast
```
Learn more: docs/tag-filtering.md
Trigger reviews and assistant actions via comments on PRs/issues.
Examples:
```text
/review
/review --check security
/visor how does caching work?
```
Learn more: docs/commands.md
Customize workflows based on PR author's permission level using built-in functions in JavaScript expressions:
```yaml
steps:
  # Run security scan only for external contributors
  security-scan:
    type: command
    exec: npm run security:full
    if: "!hasMinPermission('MEMBER')"
  # Auto-approve PRs from collaborators
  auto-approve:
    type: command
    exec: gh pr review --approve
    if: "hasMinPermission('COLLABORATOR') && totalIssues === 0"
  # Block sensitive file changes from non-members
  protect-secrets:
    type: command
    exec: echo "Checking permissions..."
    fail_if: "!isMember() && files.some(f => f.filename.startsWith('secrets/'))"
```
Available functions:
- `hasMinPermission(level)` – check if the author has at least the given permission level
- `isOwner()`, `isMember()`, `isCollaborator()`, `isContributor()`, `isFirstTimer()` – boolean checks

Learn more: docs/author-permissions.md
Suppress a specific issue by adding a nearby `visor-disable` comment.
Example (JS):
```js
const testPassword = "demo123"; // visor-disable
```
Learn more: docs/suppressions.md
If comments/annotations don't appear, verify workflow permissions and run with `--debug`.
Example:
```bash
node dist/index.js --cli --check all --debug
```
Run modes:
- Default is CLI mode everywhere (no auto-detection).
- For GitHub-specific behavior (comments, checks), run with `--mode github-actions` or set `with: mode: github-actions` when using the GitHub Action.

Examples:
```bash
# Local/CI CLI
npx -y @probelabs/visor@latest --config .visor.yaml --check all --output json

# GitHub Actions behavior from any shell/CI
npx -y @probelabs/visor@latest --mode github-actions --config .visor.yaml --check all
```
GitHub Action usage:
```yaml
- uses: probelabs/visor@vX
  with:
    mode: github-actions
    checks: all
    output-format: json
```
To force CLI mode inside a GitHub Action step, you can still use:
```yaml
env:
  VISOR_MODE: cli
```
Learn more: docs/troubleshooting.md
Prefer a GitHub App for production, and restrict remote extends unless explicitly allowed.
Examples:
```bash
visor --no-remote-extends
visor --allowed-remote-patterns "https://raw.githubusercontent.com/myorg/"
```
Learn more: docs/security.md
Use tags for fast lanes and raise parallelism cautiously.
Example:
```bash
visor --tags local,fast --max-parallelism 5
```
Learn more: docs/performance.md
Use JSON for pipelines or SARIF for code scanning. To avoid any chance of logs mixing with the result stream, prefer the built-in `--output-file`.
Examples:
```bash
visor --check security --output json --output-file visor-results.json
visor --check security --output sarif --output-file visor-results.sarif
```
Learn more: docs/observability.md
Set one provider key (Google/Anthropic/OpenAI/AWS Bedrock) via env.
Example (Action):
```yaml
- uses: probelabs/visor@v1
  env:
    GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
    # Or for AWS Bedrock:
    # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    # AWS_REGION: us-east-1
```
Learn more: docs/ai-configuration.md
Define `depends_on` to enforce order; independent checks run in parallel.
Example:
```yaml
steps:
  security: { type: ai }
  performance: { type: ai, depends_on: [security] }
```
Learn more: docs/dependencies.md. See also: forEach dependency propagation
Quick example (`outputs_raw`):
```yaml
version: "2.0"
checks:
  list:
    type: command
    exec: echo '["a","b","c"]'
    forEach: true
  summarize:
    type: script
    depends_on: [list]
    content: |
      const arr = outputs_raw['list'] || [];
      return { total: arr.length };
  branch-by-size:
    type: script
    depends_on: [list]
    content: 'return true'
    on_success:
      goto_js: |
        return (outputs_raw['list'] || []).length >= 3 ? 'after' : null;
  after:
    type: log
    message: bulk mode reached
```
Automatically remediate failures and re-run steps using config-driven routing:
- Per-step `on_fail` and `on_success` actions:
  - `retry` with fixed/exponential backoff (+ deterministic jitter)
  - `run`: remediation steps (single or list)
  - `goto`: jump back to an ancestor step and continue forward
  - `goto_js`/`run_js`: dynamic routing with safe, synchronous JS
- Loop safety:
  - Global `routing.max_loops` per scope to prevent livelock
  - Per-step attempt counters; forEach items have isolated counters
Example (retry + goto on failure):
```yaml
version: "2.0"
routing:
  max_loops: 5
steps:
  setup: { type: command, exec: "echo setup" }
  build:
    type: command
    depends_on: [setup]
    exec: |
      test -f .ok || (echo first try fails >&2; touch .ok; exit 1)
      echo ok
    on_fail:
      goto: setup
      retry: { max: 1, backoff: { mode: exponential, delay_ms: 400 } }
```
Example (on_success jump-back once):
```yaml
steps:
  unit: { type: command, exec: "echo unit" }
  build:
    type: command
    depends_on: [unit]
    exec: "echo build"
    on_success:
      run: [notify]
      goto_js: |
        // Jump back only on first success
        return attempt === 1 ? 'unit' : null;
  notify: { type: command, exec: "echo notify" }
```
Learn more: docs/failure-routing.md
Use the Claude Code SDK as a provider for deeper analysis.
Example:
```yaml
steps:
  claude-review:
    type: claude-code
    prompt: "Analyze code complexity"
```
Learn more: docs/claude-code.md
Reuse conversation context between dependent AI checks for smarter follow-ups.
Two modes available:
- `clone` (default): Independent copy of history for parallel follow-ups
- `append`: Shared conversation thread for sequential multi-turn dialogue

Example:
```yaml
steps:
  security: { type: ai }
  remediation:
    type: ai
    depends_on: [security]
    reuse_ai_session: true # Clones history by default
  verify:
    type: ai
    depends_on: [remediation]
    reuse_ai_session: true
    session_mode: append # Shares history for full conversation
```
Learn more: docs/advanced-ai.md
Schemas validate outputs; templates render GitHub-friendly comments.
Example:
```yaml
steps:
  security:
    type: ai
    schema: code-review
    prompt: "Return JSON matching code-review schema"
```
Learn more: docs/schema-templates.md
Write prompts inline or in files; Liquid variables provide PR context.
Example:
```yaml
steps:
  overview:
    type: ai
    prompt: ./prompts/overview.liquid
```
Learn more: docs/liquid-templates.md
Run Visor programmatically from Node.js without shelling out. The SDK is a thin façade over the existing engine.
Install:
```bash
npm i -D @probelabs/visor
```
ESM Example:
```js
import { loadConfig, runChecks } from '@probelabs/visor/sdk';

const config = await loadConfig();
const result = await runChecks({
  config,
  checks: Object.keys(config.checks || {}),
  output: { format: 'json' },
});
console.log('Total issues:', result.reviewSummary.issues?.length ?? 0);
```
CommonJS Example:
```js
const { loadConfig, runChecks } = require('@probelabs/visor/sdk');

(async () => {
  const config = await loadConfig();
  const result = await runChecks({
    config,
    checks: Object.keys(config.checks || {}),
    output: { format: 'json' }
  });
  console.log('Total issues:', result.reviewSummary.issues?.length ?? 0);
})();
```
Key Functions:
- `loadConfig(configPath?: string)` – load Visor config
- `resolveChecks(checkIds, config)` – expand check IDs with dependencies
- `runChecks(options)` – run checks programmatically

Learn more: docs/sdk.md
Comprehensive debugging tools help troubleshoot configurations and data flows:
Use `log()` in JavaScript expressions:
```yaml
steps:
  conditional-check:
    if: |
      log("Outputs:", outputs);
      outputs["fetch-data"]?.status === "ready"
    transform_js: |
      // `output` is auto-parsed JSON when possible; no JSON.parse needed
      log("Raw data:", output);
      output
```
Use the `json` filter in Liquid templates:
```yaml
steps:
  debug-check:
    type: logger
    message: |
      Outputs: {{ outputs | json }}
      PR: {{ pr | json }}
```
Enable debug mode:
```bash
visor --check all --debug
```
Learn more: docs/debugging.md
Extend shared configs and override per-repo settings.
Example:
```yaml
extends:
  - default
  - ./team-standards.yaml
```
Learn more: docs/configuration.md
Receive webhooks, call APIs, and schedule checks.
Examples:
```yaml
http_server: { enabled: true, port: 8080 }
steps:
  nightly: { type: ai, schedule: "0 2 * * *" }
```
Learn more: docs/http.md
Mix providers (`ai`, `mcp`, `http`, `http_client`, `log`, `command`, `script`, `github`, `claude-code`) or add your own.
- Command Provider: Execute shell commands with templating and security - docs/command-provider.md
- Script Provider: Run JavaScript in a secure sandbox - docs/script.md
- MCP Provider: Call MCP tools directly via stdio, SSE, or HTTP transports - docs/mcp-provider.md
- MCP Tools for AI: Enhance AI providers with MCP context - docs/mcp.md
- Custom Providers: Build your own providers - docs/pluggable.md
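As a small sketch of provider mixing, a `command` step can feed a `script` step, following the `outputs_raw` pattern shown earlier in this README (step names are illustrative, and the parsed shape of the dependency's output is an assumption):

```yaml
steps:
  fetch-counts:
    type: command
    exec: echo '{"errors": 2, "warnings": 5}'
  gate:
    type: script
    depends_on: [fetch-counts]
    content: |
      // Assumed shape: outputs_raw exposes the dependency's raw output
      const data = outputs_raw['fetch-counts'] || {};
      return { block: (data.errors || 0) > 0 };
```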
Common inputs include `max-parallelism`, `fail-fast`, and `config-path`.
Example:
```yaml
- uses: probelabs/visor@v1
  with:
    max-parallelism: 5
```
Learn more: docs/action-reference.md
Emit `table`, `json`, `markdown`, or `sarif`.
Example:
```bash
visor --check security --output json
```
Learn more: docs/output-formats.md
Learn more: CONTRIBUTING.md
- Failure conditions schema: docs/failure-conditions-schema.md
- Failure conditions implementation notes: docs/failure-conditions-implementation.md
- Recipes and practical examples: docs/recipes.md
- ForEach outputs and precedence (outputs vs outputs_raw vs history): docs/foreach-dependency-propagation.md
- Failure routing and on_finish aggregation (with outputs_raw in routing): docs/failure-routing.md
- Example config using outputs_raw: examples/outputs-raw-basic.yaml
MIT License – see LICENSE
Use the native GitHub provider for safe labels and comments without invoking the `gh` CLI.
Example – apply overview-derived labels to a PR:
```yaml
steps:
  apply-overview-labels:
    type: github
    op: labels.add
    values:
      - "{{ outputs.overview.tags.label | default: '' | safe_label }}"
      - "{{ outputs.overview.tags['review-effort'] | default: '' | prepend: 'review/effort:' | safe_label }}"
    value_js: |
      return values.filter(v => typeof v === 'string' && v.trim().length > 0);
```
See docs: docs/github-ops.md
Visor ships a YAML-native integration test runner so you can describe user flows, mocks, and assertions alongside your config.
- Start here: docs/testing/getting-started.md
- CLI details: docs/testing/cli.md
- Fixtures and mocks: docs/testing/fixtures-and-mocks.md
- Assertions reference: docs/testing/assertions.md
Example suite: `defaults/.visor.tests.yaml`