AI-powered Docker security scanner that explains vulnerabilities in plain English
Quick Start • Features • Installation • Usage • Contributing • Changelog
🏆 Officially recognized as an OWASP Incubator Project
Trusted by the global security community • 14,000+ downloads
DockSec is an OWASP Incubator Project that bridges the gap between complex security scan results and actionable developer fixes. It integrates industry-standard scanners (Trivy, Hadolint, Docker Scout) with advanced AI to provide context-aware security analysis.
Instead of overwhelming you with a list of 200+ CVEs, DockSec:
- Prioritizes what actually affects your specific container setup.
- Explains vulnerabilities in plain English, not just security jargon.
- Suggests specific, line-by-line fixes for your Dockerfile.
- Generates professional, interactive security reports for your team.
Think of it as having a security expert sitting right next to you, reviewing your Dockerfiles in real-time.
Being recognized as an OWASP Incubator Project means:
- ✅ Vetted by security professionals for quality and impact.
- ✅ Community-driven development and open governance.
- ✅ Trusted by enterprises and security teams worldwide.
- ✅ Transparent security practices and open-source maintenance.
DockSec follows a four-stage pipeline:
- Scan: Runs Trivy, Hadolint, and Docker Scout locally on your environment.
- Analyze: AI correlates findings across all scanners to remove noise and assess real-world impact.
- Recommend: Generates human-readable explanations and specific remediation steps.
- Report: Exports actionable results in JSON, PDF, HTML, or Markdown formats.
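The correlate-and-report idea behind the Analyze and Report stages can be sketched in a few lines. This is a hypothetical illustration, not DockSec's actual internals; all function and field names here are invented:

```python
# Hypothetical sketch of the Analyze/Report stages (not DockSec's real API):
# findings from multiple scanners are merged, duplicates dropped, then summarized.
from collections import Counter

def merge_findings(trivy, hadolint, scout):
    """Analyze: correlate raw findings across scanners, removing duplicates."""
    seen, merged = set(), []
    for source, findings in (("trivy", trivy), ("hadolint", hadolint), ("scout", scout)):
        for f in findings:
            key = (f["id"], f.get("location"))
            if key in seen:
                continue  # same issue already reported by another scanner
            seen.add(key)
            merged.append({**f, "source": source})
    return merged

def build_report(findings):
    """Report: summarize merged findings into an exportable structure."""
    return {
        "total": len(findings),
        "by_severity": dict(Counter(f["severity"] for f in findings)),
        "findings": findings,
    }

# Two scanners flag the same CVE; it should be counted once.
trivy = [{"id": "CVE-2024-0001", "severity": "critical", "location": "ubuntu:20.04"}]
scout = [{"id": "CVE-2024-0001", "severity": "critical", "location": "ubuntu:20.04"}]
hadolint = [{"id": "DL3002", "severity": "warning", "location": "Dockerfile:12"}]

report = build_report(merge_findings(trivy, hadolint, scout))
print(report["total"], report["by_severity"])  # → 2 {'critical': 1, 'warning': 1}
```

Deduplicating on a stable key before summarizing is what keeps the report from ballooning when the same CVE surfaces in Trivy and Docker Scout simultaneously.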
# Install DockSec
pip install docksec
# Scan a Dockerfile (AI-powered)
docksec Dockerfile
# Scan Dockerfile + Docker image
docksec Dockerfile -i myapp:latest
# Scan without AI (offline mode, no API key needed)
docksec Dockerfile --scan-only
- Smart Analysis: AI explains what vulnerabilities mean for your specific setup.
- Multi-LLM Support: Use OpenAI, Anthropic Claude, Google Gemini, or local models via Ollama.
- Deep Integration: Combines Trivy (vulnerabilities), Hadolint (linting), and Docker Scout.
- Security Scoring: Get a 0-100 score to track your security posture over time.
- Rich Reporting: Professional exports in HTML (interactive), PDF, JSON, and CSV.
- Privacy First: All scanning happens locally. Only scan metadata is sent to AI providers.
- CI/CD Ready: Designed for easy integration into GitHub Actions and build pipelines.
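The 0-100 security score can be pictured as a severity-weighted deduction from a perfect score. The sketch below is purely illustrative; the weights and formula are assumptions, not DockSec's actual scoring logic:

```python
# Illustrative severity-weighted scoring (weights are assumptions,
# not DockSec's real formula): start at 100 and deduct per finding.
WEIGHTS = {"critical": 25, "high": 10, "medium": 4, "low": 1}

def security_score(findings):
    """Return a 0-100 score, floored at 0 so many findings can't go negative."""
    deduction = sum(WEIGHTS.get(f["severity"], 0) for f in findings)
    return max(0, 100 - deduction)

findings = [
    {"severity": "critical"},  # e.g. running as root
    {"severity": "critical"},  # e.g. hardcoded secret
    {"severity": "medium"},
]
print(security_score(findings))  # → 46
```

A score like this is most useful as a trend line: re-run it after each Dockerfile change and watch whether the number moves in the right direction.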
Requires Python 3.12+ and Docker (for image scanning).
pip install docksec
Choose your preferred LLM provider by setting the appropriate environment variable:
# OpenAI
export OPENAI_API_KEY="your-key-here"

# Anthropic Claude
export ANTHROPIC_API_KEY="your-key-here"
export LLM_PROVIDER="anthropic"
export LLM_MODEL="claude-3-5-sonnet-20241022"

# Google Gemini
export GOOGLE_API_KEY="your-key-here"
export LLM_PROVIDER="google"
export LLM_MODEL="gemini-1.5-pro"

# Ollama (local models) — install Ollama from https://ollama.ai
export LLM_PROVIDER="ollama"
export LLM_MODEL="llama3.1"
To enable full vulnerability and linting support:
# Automatically install Trivy and Hadolint
python -m docksec.setup_external_tools
# Basic Dockerfile analysis
docksec Dockerfile
# Full analysis (Dockerfile + Image)
docksec Dockerfile -i nginx:latest
# Image-only scan (no Dockerfile needed)
docksec --image-only -i nginx:latest
# Use a specific AI model
docksec Dockerfile --provider anthropic --model claude-3-5-sonnet-20241022
# Save report to a custom path
docksec Dockerfile -o my_report.html
| Option | Description |
|---|---|
| dockerfile | Path to the Dockerfile to analyze |
| -i, --image | Docker image name to scan |
| -o, --output | Custom output file path |
| --provider | LLM provider (openai, anthropic, google, ollama) |
| --model | Specific model name to use |
| --ai-only | Run AI analysis only (requires Dockerfile) |
| --scan-only | Run security scanners only (no AI) |
| --image-only | Scan image without Dockerfile analysis |
| --version | Show version information |
You can customize DockSec via environment variables or a .env file:
# LLM Settings
LLM_PROVIDER=openai # openai, anthropic, google, ollama
LLM_MODEL=gpt-4o # Model name
LLM_TEMPERATURE=0.0 # Creativity (0.0 recommended for security)
# Results & Timeouts
DOCKSEC_RESULTS_DIR=./results # Where to save reports
TRIVY_SCAN_TIMEOUT=600        # Timeout for image scans
🔍 Scanning Dockerfile...
⚠️ Security Score: 45/100
Critical Issues (3):
• Running as root user (line 12)
• Hardcoded API key detected (line 23)
• Using vulnerable base image (ubuntu:20.04)
💡 AI Recommendations:
1. Add non-root user: RUN useradd -m appuser && USER appuser
2. Move secrets to environment variables or build secrets.
3. Update FROM ubuntu:20.04 to ubuntu:22.04 (fixes 12 CVEs).
📊 Full report generated: results/nginx_latest_security_report.html
- Multi-LLM support (OpenAI, Anthropic, Google, Ollama)
- Professional HTML/PDF report generation
- Docker Compose multi-service scanning
- Kubernetes manifest analysis
- GitHub Action for automated PR reviews
- Custom security policy enforcement
"No API Key provided"
→ Set your API key (e.g., OPENAI_API_KEY) or use --scan-only mode.
"Hadolint/Trivy not found"
→ Run python -m docksec.setup_external_tools to install them automatically.
"Python version not supported"
→ DockSec requires Python 3.12+. We recommend using pyenv or conda to manage versions.
"Connection refused" with Ollama
→ Ensure the Ollama daemon is running (ollama serve) and you have pulled the model (ollama pull llama3.1).
DockSec is proud to be an OWASP Incubator Project. Our mission is to make container security accessible, understandable, and actionable for every developer.
- OWASP Project Page: owasp.org/www-project-docksec/
- PyPI: pypi.org/project/docksec/
- Issues: Report a bug
- Discussions: Join the community
Built with ❤️ by Advait Patel and the OWASP community.
