An open-source pre-deployment risk intelligence platform for infrastructure changes.
DeployWhisper helps platform engineers, DevOps teams, and SREs review deployment artifacts before release. It analyzes Terraform, Kubernetes, Ansible, Jenkins, and CloudFormation inputs, then turns those inputs into a single advisory briefing with risk scoring, blast radius context, rollback guidance, and plain-English narrative output.
Quick links: Quick Start · Skills Registry · API Endpoints · Development · Contributing · Open Source
- Introduction
- How It Works
- Key Features
- Screenshots And Demo
- Current Status
- Prerequisites
- Quick Start
- Docker Deployment
- API Endpoints
- CLI Usage
- Configuration
- Architecture
- Documentation
- Development
- CI
- Contributing
- Open Source
- Roadmap Signals
- Status
DeployWhisper exists because deployment risk is rarely visible in a single file. A Terraform change might look safe on its own, a Kubernetes manifest might look routine on its own, and a Jenkins pipeline change might look minor on its own, but the real risk often appears only when those artifacts are reviewed together.
DeployWhisper treats deployment review as a context problem. It combines multi-tool parsing, local-first analysis, tool-specific AI Skills, incident-memory matching, and advisory summaries so teams can make better go/no-go decisions before changes reach production.
The current implementation is built as a pure-Python application with:
- NiceGUI for the operator-facing web UI
- FastAPI for the versioned API surface
- SQLAlchemy and SQLite for persistence
- Direct SDK adapters for OpenAI, Anthropic, Gemini, and local Ollama narrative generation
- Upload one or more supported deployment artifacts.
- DeployWhisper detects the tool type and filters unsupported or sensitive inputs.
- Parsers normalize changes into a shared internal model.
- The analysis pipeline scores risk, computes blast radius, generates rollback guidance, and checks incident similarity.
- A narrative layer produces a plain-English deploy briefing.
- The report is persisted with audit metadata before it is shown in the UI or returned through the API or CLI.
At a high level:
Artifacts -> Parse -> Normalize -> Score -> Blast Radius -> Rollback -> Narrative -> Advisory Report
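The flow above can be sketched as plain functions over a shared change model. This is an illustrative sketch only — the type names, field names, and scoring weights below are assumptions, not DeployWhisper's actual internal API:

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    """Normalized change record shared by all parsers (illustrative)."""
    tool: str          # e.g. "terraform", "kubernetes"
    resource: str      # normalized resource identifier
    action: str        # "create" | "update" | "delete"

@dataclass
class Report:
    changes: list
    risk_score: int = 0
    blast_radius: list = field(default_factory=list)
    narrative: str = ""

def score(changes):
    # Toy heuristic: deletes are riskier than updates, updates than creates.
    weights = {"create": 1, "update": 3, "delete": 8}
    return sum(weights.get(c.action, 1) for c in changes)

def blast_radius(changes, topology):
    # Walk one hop in a service-topology adjacency map.
    impacted = set()
    for c in changes:
        impacted.update(topology.get(c.resource, []))
    return sorted(impacted)

def analyze(changes, topology):
    report = Report(changes=changes)
    report.risk_score = score(changes)
    report.blast_radius = blast_radius(changes, topology)
    report.narrative = (
        f"{len(changes)} change(s), risk {report.risk_score}, "
        f"downstream impact: {', '.join(report.blast_radius) or 'none known'}"
    )
    return report

changes = [
    Change("terraform", "aws_db_instance.main", "delete"),
    Change("kubernetes", "deployment/api", "update"),
]
topology = {"aws_db_instance.main": ["billing-service", "api"]}
report = analyze(changes, topology)
print(report.narrative)
```

The real pipeline adds rollback guidance, incident matching, and LLM narrative on top of this deterministic core, but the shape — normalize first, then score and contextualize — is the same.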
- Multi-tool intake for Terraform, Kubernetes, Ansible, Jenkins, and CloudFormation
- Plain-English risk narrative and deploy recommendation
- Advisory-only output with explicit human-review posture
- Blast radius analysis using a project-scoped service-topology graph with a shared multi-source import foundation
- Rollback plan generation with complexity signaling
- Incident-history matching for operational memory
- API, CLI, and web entrypoints over one shared analysis pipeline
- Local-first security model that keeps raw IaC local and avoids persisting API keys
- Custom AI Skills for team-specific domain guidance
- Public Skills Registry for published built-in skills: https://deploywhisper.github.io/skills-registry/
- Analysis history and audit metadata for later review
Upload flow, risk summary, and advisory result

Saved analyses, filters, and audit metadata

Provider config, topology upload, and custom skills

DeployWhisper is an open-source project in active development. The current released version is useful today for teams that want a local-first, advisory review layer before infrastructure changes are shipped.
What users can use today:
- Web review workflow: upload deployment artifacts in the NiceGUI dashboard and get a persisted advisory report with risk score, severity, recommendation, findings, evidence, context quality, blast radius, rollback guidance, and audit metadata.
- Multi-tool analysis: analyze Terraform, Kubernetes, Ansible, Jenkins, and CloudFormation inputs through one shared pipeline instead of reviewing every tool in isolation.
- LLM-assisted narrative: connect deterministic scoring with plain-English deployment guidance using Ollama, OpenAI, Anthropic, Gemini, OpenRouter, Groq, or xAI provider settings.
- Local-first safety posture: keep raw IaC processing local, avoid storing provider API keys in the database, exclude sensitive files from unsafe handling, and keep every verdict advisory rather than automatically blocking a release.
- Evidence-backed confidence: trace the report back to findings, resource-level contributors, uploaded artifact references, parser coverage, topology freshness, and warning signals when context is limited.
- Blast-radius and rollback context: use service-topology input to explain likely downstream impact and generate rollback steps with complexity scoring.
- Analysis history: review saved reports later, filter previous analyses, inspect audit metadata, and compare repeated scans of the same artifact set.
- Provider and admin settings UI: configure LLM provider metadata, upload topology context, manage custom AI Skills, and see provider readiness before running analysis.
- REST API and CLI access: run the same analysis pipeline from `/api/v1` endpoints or the headless CLI for local automation and CI workflows.
- Shareable reports: create read-only report links, optionally protect sensitive shared reports with a password, redact filenames, and compare shared reruns when previous scans exist.
- Published Skills Registry: browse published built-in skills at https://deploywhisper.github.io/skills-registry/ and extend guidance with custom skills.
- Published GitHub Action path: use the dedicated `deploywhisper/analyze-action@v1` action to analyze PR artifact changes, post/update an advisory PR comment, and expose report outputs for follow-on workflow steps.
- Published container path: run the released container image `ghcr.io/deploywhisper/deploywhisper:1.0.0` with SQLite-backed persistence for a self-hosted single-container setup.
- Project quality baseline: GitHub Actions CI, Python quality checks, sharded tests, local CI scripts, and optional UI accessibility smoke checks are in place.
Why this gives users value:
- Platform and DevOps teams get a faster first-pass deployment review before a human approval meeting.
- Engineers can see why a change is risky, which resource/file caused it, what to verify, and what rollback concern to discuss.
- Teams can build trust gradually because reports keep deterministic evidence visible even when LLM narrative is enabled.
- Open-source users can start locally with Docker or Python, then add provider keys, topology context, custom skills, and GitHub PR automation when they are ready.
What is still evolving:
- Production-grade authn/authz for shared deployments
- Lightweight project/workspace scoping for cleaner multi-repository history isolation
- Richer incident ingestion workflows
- Broader deployment integrations and release automation
- More complete NFR hardening for shared or internet-facing environments
Planning and design artifacts live under _bmad-output/planning-artifacts/.
This README describes the current implementation honestly. Some sections reflect the implemented foundation, while others point to the intended open-source direction captured in the planning artifacts.
- Python 3.11 or newer recommended
- A virtual environment for local development
- Optional LLM provider access:
- Ollama for fully local mode
- or OpenAI / Anthropic / Gemini / OpenRouter / Groq / xAI credentials via environment variables
Tier-1 providers use direct adapters:
- OpenAI via the official `openai` SDK
- Anthropic via the official `anthropic` SDK
- Gemini via the official `google-genai` SDK
- Ollama via the local HTTP adapter
OpenRouter, Groq, and xAI remain on the compatibility path through one explicit OpenAI-compatible adapter. For local-only narrative generation, Ollama is the intended path.
Provider settings and health surfaces also expose explicit capability metadata for structured output, local-only mode, remote MCP, local MCP, and tool approval. These MCP-related flags are planning metadata only for now; DeployWhisper does not execute MCP tools in the current narrative flow.
```bash
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
pip install -r requirements.txt
python app.py
```

The app starts on:

- UI: `http://127.0.0.1:8080/`
- API docs: `http://127.0.0.1:8080/api/v1/docs`
- Health: `http://127.0.0.1:8080/api/v1/health`
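In scripted setups it can help to wait for the health endpoint before submitting work. A minimal poller, written against an injectable fetch function so it stays testable offline — this helper is not part of DeployWhisper, and only the `data.status` field it checks comes from the documented health response:

```python
import json
import time
import urllib.request

def wait_for_health(url, fetch=None, timeout=30.0, interval=1.0):
    """Poll the health endpoint until it reports ok, or time out.

    `fetch` is injectable for testing; by default it performs a real HTTP GET.
    """
    if fetch is None:
        def fetch(u):
            with urllib.request.urlopen(u) as resp:
                return json.loads(resp.read().decode())
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            payload = fetch(url)
            if payload.get("data", {}).get("status") == "ok":
                return payload
        except OSError:
            pass  # server not up yet; retry until the deadline
        time.sleep(interval)
    raise TimeoutError(f"{url} not healthy after {timeout}s")

# Offline demo with a stubbed fetch: first call refused, second call healthy.
responses = iter([OSError("refused"), {"data": {"status": "ok"}}])
def stub(_url):
    item = next(responses)
    if isinstance(item, Exception):
        raise item
    return item

result = wait_for_health("http://127.0.0.1:8080/api/v1/health",
                         fetch=stub, timeout=5, interval=0)
print(result)
```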
```bash
./.venv/bin/python -m unittest discover -q
```

Credentialed provider smoke tests are opt-in so the default suite remains offline:

```bash
DEPLOYWHISPER_LIVE_PROVIDER_SMOKE=1 \
DEPLOYWHISPER_LIVE_PROVIDER_SMOKE_PROVIDERS=openai \
./.venv/bin/python -m unittest tests.test_llm.test_live_provider_smoke -q
```

The smoke test loads provider keys from environment variables or `.env` and only runs providers with usable keys. Provider-specific model and API-base overrides use `DEPLOYWHISPER_LIVE_<PROVIDER>_MODEL` and `DEPLOYWHISPER_LIVE_<PROVIDER>_API_BASE`.
DeployWhisper supports a single-container deployment model.
Example Docker Compose file (`docker-compose.yml`):

```yaml
services:
  deploywhisper:
    # To build from source instead of using the published image,
    # replace this "image" line with a "build" section.
    image: ghcr.io/deploywhisper/deploywhisper:1.0.0
    ports:
      - "8080:8080"
    restart: unless-stopped
    init: true
    environment:
      APP_HOST: 0.0.0.0
      APP_PORT: 8080
      APP_BASE_URL: http://localhost:8080
      LOG_LEVEL: INFO
      DATABASE_URL: sqlite:///data/deploywhisper.db
      DEPLOYWHISPER_SHARE_TOKEN: "DEPLOYWHISPER_API_TOKEN-for-test-123"
      LLM_PROVIDER: ollama
      LLM_MODEL: ollama/gemma4:e4b
      LLM_API_BASE: http://host.docker.internal:11434
      LLM_API_KEY: ""
      OPENAI_API_KEY: ""
      ANTHROPIC_API_KEY: ""
      GEMINI_API_KEY: ""
      GOOGLE_API_KEY: ""
      OPENROUTER_API_KEY: ""
      GROQ_API_KEY: ""
      XAI_API_KEY: ""
    volumes:
      - deploywhisper-data:/app/data

volumes:
  deploywhisper-data:
```

After creating this file, run:

```bash
docker-compose up -d
```

Default container behavior:
- Port `8080` exposed from the app container
- SQLite database stored under `/app/data`
- Default provider set to Ollama directly in `docker-compose.yml`
- `POST /api/v1/analyses/{id}/share` stays disabled until `DEPLOYWHISPER_SHARE_TOKEN` is set in Compose
Provider settings in the checked-in docker-compose.yml are literal container
environment values. The file does not rely on project-root .env interpolation.
For machine-specific provider settings, create an untracked
docker-compose.override.yml and put only your local overrides there.
Example `docker-compose.override.yml` for OpenAI:

```yaml
services:
  deploywhisper:
    environment:
      LLM_PROVIDER: openai
      LLM_MODEL: gpt-4.1-mini
      LLM_API_BASE: https://api.openai.com/v1
      OPENAI_API_KEY: your-real-key
```

Example override for Groq:

```yaml
services:
  deploywhisper:
    environment:
      LLM_PROVIDER: groq
      LLM_MODEL: groq/qwen/qwen3-32b
      LLM_API_BASE: https://api.groq.com/openai/v1
      GROQ_API_KEY: your-real-key
```

Example override for local Ollama:

```yaml
services:
  deploywhisper:
    environment:
      LLM_PROVIDER: ollama
      LLM_MODEL: ollama/llama3
      LLM_API_BASE: http://host.docker.internal:11434
```

After editing Compose settings, recreate the container:
```bash
docker compose up -d --force-recreate
```

If provider settings were already saved in the DeployWhisper settings page,
those non-secret database settings take precedence over LLM_PROVIDER,
LLM_MODEL, and LLM_API_BASE; API keys still come only from container
environment variables or runtime secrets. When you select a provider in the
settings page, DeployWhisper resolves that provider's environment key
(GROQ_API_KEY for Groq, OPENAI_API_KEY for OpenAI, or fallback
LLM_API_KEY) and pre-fills the API key field from the running container.
Saving the settings page activates the selected provider as the single runtime
provider.
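The provider-to-environment-key resolution described above can be sketched as a small lookup with the generic `LLM_API_KEY` fallback. The mapping mirrors the variables listed in this README, but the helper itself is illustrative, not the actual settings code (and the `GOOGLE_API_KEY` alternate for Gemini is omitted for brevity):

```python
import os

# Provider-specific environment variables, per the configuration section.
PROVIDER_ENV_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
    "groq": "GROQ_API_KEY",
    "xai": "XAI_API_KEY",
    "ollama": None,  # local mode: no key required
}

def resolve_api_key(provider, env=os.environ):
    """Return the provider's key, falling back to the generic LLM_API_KEY."""
    specific = PROVIDER_ENV_KEYS.get(provider)
    if specific and env.get(specific):
        return env[specific]
    return env.get("LLM_API_KEY", "")

env = {"GROQ_API_KEY": "gsk-demo", "LLM_API_KEY": "generic"}
print(resolve_api_key("groq", env))  # uses the provider-specific key
print(resolve_api_key("xai", env))   # falls back to the generic key
```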
Do not bake secrets into the image. For local Docker Compose, keep secrets in
an untracked docker-compose.override.yml; for managed deployments, use your
orchestration platform's secret mechanism.
Direct `docker run` example:

```bash
docker run -d \
  -p 8080:8080 \
  -e APP_HOST=0.0.0.0 \
  -e APP_PORT=8080 \
  -e APP_BASE_URL=https://deploywhisper.example.com \
  -e DEPLOYWHISPER_SHARE_TOKEN=replace-with-a-long-random-secret \
  ghcr.io/deploywhisper/deploywhisper:1.0.0
```

`GET /api/v1/health`

Response shape:
```json
{
  "data": {
    "status": "ok",
    "mode": "foundation",
    "core_status": "ok",
    "llm": {
      "status": "ok",
      "ready": true,
      "provider": "ollama",
      "model": "ollama/llama3",
      "local_mode": true,
      "requires_api_key": false,
      "has_api_key": false,
      "message": "LLM provider connection validated for analysis.",
      "source": "environment"
    }
  },
  "meta": {
    "app": "DeployWhisper",
    "version": "0.1.0"
  }
}
```

`POST /api/v1/analyses`

Upload one or more artifact files as multipart form-data.
Response includes:
- intake summary
- parse batch
- evidence items
- assessment
- narrative availability/failure notice when the LLM path degrades
- advisory summary
- share summary
- persisted report metadata
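A submission can be built with only the standard library. The multipart encoding below is a generic sketch of how artifacts might be sent to `POST /api/v1/analyses` — the form field name `files` is an assumption, so verify it against the generated docs at `/api/v1/docs` before relying on it:

```python
import io
import urllib.request
import uuid

def build_multipart(files):
    """Encode {filename: bytes} as multipart/form-data; returns (body, content_type)."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, payload in files.items():
        buf.write(f"--{boundary}\r\n".encode())
        buf.write((
            f'Content-Disposition: form-data; name="files"; filename="{name}"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
        ).encode())
        buf.write(payload)
        buf.write(b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return buf.getvalue(), f"multipart/form-data; boundary={boundary}"

body, content_type = build_multipart({
    "plan.json": b'{"resource_changes": []}',
    "deployment.yaml": b"kind: Deployment",
})
req = urllib.request.Request(
    "http://127.0.0.1:8080/api/v1/analyses",
    data=body,
    headers={"Content-Type": content_type},
    method="POST",
)
# urllib.request.urlopen(req) would submit the batch to a running server.
```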
`GET /api/v1/analyses`

Supports filtering by:

- `severity`
- `recommendation`
- `search`
- `page`
- `page_size`
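The filters combine as ordinary query-string parameters; for example (the parameter values here are illustrative):

```python
from urllib.parse import urlencode

params = {
    "severity": "high",
    "recommendation": "review",
    "search": "terraform",
    "page": 1,
    "page_size": 20,
}
url = "http://127.0.0.1:8080/api/v1/analyses?" + urlencode(params)
print(url)
```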
`GET /api/v1/analyses/{report_id}`

DeployWhisper includes a headless CLI entrypoint for local or CI usage.
```bash
python cli.py analyze path/to/plan.json path/to/deployment.yaml
```

The CLI prints structured JSON containing:
- intake status
- risk assessment
- advisory summary
- share summary
- persisted report metadata
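Because the CLI prints structured JSON, its output is easy to post-process in local automation. The sketch below gates on the advisory fields; the exact key names (`assessment`, `advisory`, `severity`, `recommendation`) are assumptions inferred from the report fields described in this README, so check them against real CLI output first. DeployWhisper itself stays advisory — any gating is a local policy choice:

```python
import json

# In practice this would come from the CLI, e.g.:
#   raw = subprocess.run(["python", "cli.py", "analyze", ...],
#                        capture_output=True, text=True).stdout
raw = json.dumps({
    "assessment": {"risk_score": 72, "severity": "high"},
    "advisory": {"recommendation": "review-before-deploy"},
})

report = json.loads(raw)
severity = report.get("assessment", {}).get("severity", "unknown")
recommendation = report.get("advisory", {}).get("recommendation", "unknown")

# Local policy: surface high-severity results loudly, but stay advisory.
needs_attention = severity in {"high", "critical"}
print(f"severity={severity} recommendation={recommendation} "
      f"needs_attention={needs_attention}")
```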
```bash
python cli.py skills
```

This shows built-in and custom skill override status.
DeployWhisper is configured primarily through environment variables and stored non-secret settings.
Core settings:

- `APP_NAME`
- `APP_VERSION`
- `APP_HOST`
- `APP_PORT`
- `LOG_LEVEL`
- `DATABASE_URL`
- `TOPOLOGY_PATH`

LLM settings:

- `LLM_PROVIDER`
- `LLM_MODEL`
- `LLM_API_BASE`
- `LLM_API_KEY`
- `OPENAI_API_KEY`
- `ANTHROPIC_API_KEY`
- `GEMINI_API_KEY`
- `GOOGLE_API_KEY`
- `OPENROUTER_API_KEY`
- `GROQ_API_KEY`
- `XAI_API_KEY`
DeployWhisper is designed so that:
- raw IaC stays local
- sensitive files such as `.env`, `.pem`, `.key`, `id_rsa`, `credentials`, and `*.tfstate` are excluded from unsafe downstream handling
- provider API keys are not stored in the application database
- advisory results remain non-blocking in v1
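The exclusion list above can be expressed as a small pattern filter. This sketch mirrors the file names called out in this README (the exact pattern spellings, e.g. treating `.key` as `*.key`, are an interpretation), and the helper itself is illustrative rather than DeployWhisper's actual intake code:

```python
import fnmatch
import os

# Patterns from the security posture above (spellings are an interpretation).
SENSITIVE_PATTERNS = [
    ".env", "*.pem", "*.key", "id_rsa", "credentials", "*.tfstate",
]

def is_sensitive(path):
    """True when a file name matches a known sensitive pattern."""
    name = os.path.basename(path)
    return any(fnmatch.fnmatch(name, pat) for pat in SENSITIVE_PATTERNS)

uploads = [
    "infra/main.tf",
    "k8s/deployment.yaml",
    "infra/terraform.tfstate",
    "secrets/.env",
    "keys/id_rsa",
]
safe = [p for p in uploads if not is_sensitive(p)]
print(safe)  # ['infra/main.tf', 'k8s/deployment.yaml']
```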
DeployWhisper uses one shared analysis core with three access surfaces:
- Web UI via NiceGUI
- REST API via FastAPI
- CLI via `cli.py`
Primary runtime components:
- `parsers/`: tool-specific artifact parsing
- `analysis/`: risk scoring, blast radius, rollback, and incident matching
- `services/`: orchestration, persistence, settings, and topology workflows
- `llm/`: provider routing, prompts, skill context, and narrative generation
- `ui/`: dashboard, history, incidents, and settings pages
- `api/`: versioned routes and schema envelopes
Key architectural constraints:
- local-first processing for raw artifacts
- advisory-only decision support in v1
- single-container deployment model
- stable API contract under `/api/v1`
- persisted reports before presentation
For more detail, see _bmad-output/planning-artifacts/architecture.md.
Project documentation currently lives in a few places:
- PRD
- Architecture
- UX Specification
- Epics and Stories
- Implementation Readiness Report
- Evidence Model Foundation
- CI Guide
- CI Secrets Checklist
```
api/       FastAPI routes and schemas
analysis/  Risk scoring, blast radius, rollback, incident matching
cli/       Headless analysis commands
llm/       Narrative generation and skill context
models/    ORM tables and repositories
parsers/   Tool-specific parsers
services/  Shared orchestration and persistence logic
ui/        NiceGUI routes and components
tests/     API, CLI, parser, service, UI, and infra tests
```
Install dependencies:

```bash
pip install -r requirements.txt
```

Run the app:

```bash
python app.py
```

Run the full test suite:

```bash
./.venv/bin/python -m unittest discover -q
```

Run the local CI-equivalent checks:

```bash
bash scripts/ci-local.sh
```

Run the browser keyboard smoke for the review flow:

```bash
npm install
npm run test:ui-review
```

Run the real macOS VoiceOver smoke on a GUI-enabled Mac:

```bash
npm run setup:ui-review
npm run test:ui-review:voiceover
```

GitHub Actions is configured in `.github/workflows/ci.yml`.
The published GitHub Marketplace action now lives in its own dedicated public
repository:
deploywhisper/analyze-action@v1.
Typical PR usage:
```yaml
name: DeployWhisper
on:
  pull_request:
    types: [opened, synchronize, reopened]

permissions:
  contents: read
  pull-requests: write

jobs:
  deploywhisper:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: deploywhisper/analyze-action@v1
        with:
          api-url: ${{ secrets.DEPLOYWHISPER_API_URL }}
```

What the action does:
- detects changed files from the pull request diff
- filters to supported DeployWhisper artifacts locally before upload
- submits those artifacts to the existing `POST /api/v1/analyses` endpoint
- posts a single markdown PR comment and updates that same comment on re-runs
- compares the latest report with the previous PR scan so reruns show score and severity deltas in the refreshed comment
- exits `0` when analysis succeeds, regardless of risk verdict
- exposes outputs for follow-on GitHub steps:
  - `report-id`
  - `report-link` (shareable `/reports/{id}` URL)
  - `severity`
  - `recommendation`
  - `share-summary-json`
  - `share-summary-markdown`
  - `comment-id`
  - `comment-url`
  - `comment-updated`
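The rerun comparison the action writes into the refreshed comment amounts to simple deltas between the previous and latest reports. A sketch — the field names, severity ladder, and output wording below are illustrative, not the action's actual format:

```python
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def format_delta(previous, latest):
    """Summarize score and severity movement between two scans."""
    diff = latest["risk_score"] - previous["risk_score"]
    arrow = "▲" if diff > 0 else "▼" if diff < 0 else "→"
    sev_shift = (SEVERITY_ORDER.index(latest["severity"])
                 - SEVERITY_ORDER.index(previous["severity"]))
    sev_note = ("unchanged" if sev_shift == 0
                else "escalated" if sev_shift > 0 else "reduced")
    return (f"Risk score {previous['risk_score']} {arrow} {latest['risk_score']} "
            f"({diff:+d}); severity {previous['severity']} -> "
            f"{latest['severity']} ({sev_note})")

prev = {"risk_score": 41, "severity": "medium"}
curr = {"risk_score": 63, "severity": "high"}
print(format_delta(prev, curr))
# Risk score 41 ▲ 63 (+22); severity medium -> high (escalated)
```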
Optional inputs:
- `api-token`: bearer token for protected DeployWhisper APIs
- `changed-files`: override auto-detected PR files with a comma- or newline-separated list
- `working-directory`: repository root when the checkout is not in `.`
Shared report links now resolve to /reports/{id} read-only pages. For sensitive
reports, configure password protection and opt-in file-name redaction via
POST /api/v1/analyses/{id}/share with the X-DeployWhisper-Share-Token
header. Set DEPLOYWHISPER_SHARE_TOKEN before exposing that management API.
When a prior scan exists for the same analyzed artifact set, the shared report
page also exposes a Compare with previous view with risk-score, findings, and
evidence deltas.
DeployWhisper also supports a GitHub App adapter for webhook-driven PR analysis,
checks API publishing, and self-hosted app setup. The App adapter is
documented in docs/github-app.md and is designed to
run in three lanes:
- Action-first: workflow file + Marketplace Action
- Advanced self-hosted GitHub App: webhook and checks against your own DeployWhisper server, with app creation and installation handled in GitHub's UI
- Combined mode: Action for explicit workflow control, self-hosted GitHub App for checks and webhook automation
The recommended open-source posture is Action-first. If you want GitHub App
capabilities, create a private/self-hosted GitHub App in your own account or
organization and point it at your own DeployWhisper instance. See
docs/github-app-self-hosted-setup.md.
Keep the DeployWhisper / Risk Analysis check advisory-only in GitHub branch
protection; do not add it as a required status check.
To scaffold this setup into another repository with a workflow file, README update, and optional self-hosted GitHub App notes, run:
```bash
deploywhisper github init
```

Example:
```bash
curl -X POST http://127.0.0.1:8080/api/v1/analyses/17/share \
  -H "Content-Type: application/json" \
  -H "X-DeployWhisper-Share-Token: $DEPLOYWHISPER_SHARE_TOKEN" \
  -d '{"password":"review-only","redact_filenames":true}'
```

The app repository no longer carries Marketplace action packaging files. Action
source, release metadata, and consumer smoke verification live in the dedicated
deploywhisper/analyze-action repository.
Current CI stages:
- `quality`
- `changed-tests`
- `test`
- `report`
- `notify-failure`
The CI pipeline is backend-focused and intentionally skips frontend-style burn-in loops because the current stack is Python unittest, not a flaky browser E2E suite.
For accessibility-sensitive UI changes, the repo also ships an opt-in macOS verification lane:
- `npm run test:ui-review` exercises the seeded review flow with Playwright keyboard automation.
- `npm run test:ui-review:voiceover` exercises the same flow with real VoiceOver on macOS after `npm run setup:ui-review`.
- `RUN_UI_A11Y=1 bash scripts/ci-local.sh` appends both lanes locally when Node dependencies are installed. The VoiceOver step auto-skips on non-macOS hosts.
Contributions are welcome.
If you want to contribute:
- Fork the repository
- Create a feature branch
- Run the local test suite
- Open a pull request with clear rationale and verification notes
High-value contribution areas:
- parser coverage and fixture quality
- risk scoring improvements
- incident-ingestion workflows
- topology-management UX
- authn/authz hardening
- documentation and examples
- Improve README examples for CLI and API usage
- Add realistic parser fixtures for one supported tool
- Strengthen docs around topology JSON and custom skill uploads
- Add screenshots or a short demo GIF once the UI flow is stable
- Expand CI or test coverage around one isolated service or route
- Keep changes scoped and reviewable
- Include verification notes
- Avoid checking in secrets, real infrastructure state, or sensitive sample files
- Prefer tests for behavior changes
- Prefer docs updates when behavior or setup changes
Use GitHub Issues for:
- bug reports
- parser edge cases
- feature requests
- documentation gaps
- architecture or API discussions before larger implementation work
For larger proposals, open an issue first so the implementation direction is discussed before code starts to drift.
DeployWhisper is being developed as an open-source project for DevOps and platform engineering teams.
The intended open-source value proposition is:
- transparent local-first deployment review
- self-hostable architecture
- extensible AI Skills model
- advisory-first automation for CI/CD workflows
If you adopt or extend the project, keep the current maturity in mind: the core analysis path exists, but some production-hardening work is still in progress.
The repository is best understood as:
- a working implementation foundation
- an evolving open-source DevOps tool
- a planning-backed project with explicit product and architecture artifacts
That means contributors can help in both code and product-facing areas:
- implementation
- tests
- docs
- examples
- UX captures
- integration ideas
- real screenshots and demo media
- more sample artifacts for supported tools
- contributor-facing repo metadata such as `LICENSE`, `CONTRIBUTING.md`, `CODE_OF_CONDUCT.md`, and `SECURITY.md`
- broader onboarding docs for first-time contributors
Near-term directions already visible in the repo and planning artifacts:
- stronger production hardening around auth, observability, and recovery
- richer CI/CD integration patterns
- more complete incident-ingestion and trend workflows
- better contribution and onboarding surfaces for open-source users
Current implementation state:
- foundation scaffold is complete
- shared analysis pipeline is implemented
- API, UI, and CLI surfaces exist
- CI is in place
- broader production hardening is still underway
This repository should be treated as an evolving open-source platform project rather than a fully hardened production product.
