We release patches for security vulnerabilities. Currently supported versions:
| Version | Supported |
|---|---|
| 0.1.x | ✅ |
| < 0.1 | ❌ |
We take security vulnerabilities seriously. If you discover a security issue in Weft, please report it responsibly:
- Do NOT open a public GitHub issue for security vulnerabilities
- Use GitHub's Security Advisory feature:
  1. Go to the Security tab
  2. Click "Report a vulnerability"
  3. Fill out the private security advisory form
- Or email us directly if you prefer: security@weftlabs.com
To help us triage and fix the issue quickly, please include:
- Description of the vulnerability
- Steps to reproduce the issue
- Affected versions of Weft
- Potential impact (e.g., data exposure, code execution)
- Suggested fix (if you have one)
- Your contact information (for follow-up questions)
After you report a vulnerability, you can expect:
- Initial response: Within 48 hours
- Status update: Within 7 days
- Fix timeline: Depends on severity
  - Critical: Patch within 7 days
  - High: Patch within 30 days
  - Medium/Low: Addressed in the next planned release
Our disclosure policy:
- Please give us reasonable time to fix the issue before public disclosure
- We will credit you in the security advisory (unless you prefer to remain anonymous)
- Coordinated disclosure: We'll work with you on timing of public announcements
When using Weft, follow these security guidelines:
- Never commit API keys to version control
- Use environment variables: Store `ANTHROPIC_API_KEY` in a `.env` file (add it to `.gitignore`); see the example after this list
- Use dotenv files: Keep `.env` outside of Git repositories
- Rotate keys regularly: Change API keys every 90 days as a best practice
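A minimal sketch of the recommended pattern, assuming the `python-dotenv` package and a git-ignored `.env` file at the repository root (file and variable names here are illustrative):

```python
# config.py -- load the API key from the environment, never from source control.
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads a .env file into the process environment, if one exists

api_key = os.environ.get("ANTHROPIC_API_KEY")
if not api_key:
    raise RuntimeError("ANTHROPIC_API_KEY is not set; add it to your environment or .env file.")
```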
- Always review AI outputs before accepting (human-in-the-loop is mandatory)
- Check for security vulnerabilities: SQL injection, XSS, command injection, etc. (an example of what to flag follows this list)
- Validate input handling: Ensure generated code sanitizes user input
- Review dependencies: Check if AI suggests insecure or outdated packages
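To illustrate the kind of pattern to flag during review, the sketch below contrasts injectable string-formatted SQL with a parameterized query, using the standard library's `sqlite3` purely as an example (this is not Weft's own code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "alice'; DROP TABLE users; --"

# Reject during review: string-formatted SQL is injectable.
#   conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Accept: a parameterized query lets the driver treat the input as data, not SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)
```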
- Avoid passing secrets to AI agents in prompts (a simple redaction sketch follows this list)
- Don't include PII: Keep user data and personally identifiable information out of prompts
- Review logs: AI history contains all prompts - ensure no sensitive data leaked
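A minimal, illustrative redaction helper; the patterns below are assumptions rather than an exhaustive filter, and keeping secrets out of prompts entirely remains the primary control:

```python
import re

# Illustrative patterns only; extend them for your own secret formats.
SECRET_PATTERNS = [
    re.compile(r"sk-ant-[A-Za-z0-9_\-]+"),                         # assumed API key format
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),  # generic key=value secrets
]

def redact(text: str) -> str:
    """Mask obvious secrets before text is placed into a prompt or log."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact("ANTHROPIC_API_KEY=sk-ant-abc123 please add the login feature"))
```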
- Use read-only mounts: Docker configurations mount code repositories as read-only
- Run as non-root: Container users should not have root privileges (see the startup-check sketch after this list)
- Keep images updated: Regularly pull latest base images
- Limit network access: Watchers only need API access, not full internet
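A hypothetical startup guard showing how the non-root and read-only expectations can be checked from inside a watcher container; the mount path is an assumption for illustration, not part of Weft's shipped configuration:

```python
# entrypoint_check.py -- illustrative container self-check (Unix-only).
import os
import sys

REPO_MOUNT = "/workspace/repo"  # assumed mount point; match your compose file

def main() -> None:
    # Refuse to run with root privileges inside the container.
    if os.geteuid() == 0:
        sys.exit("Refusing to run as root; configure a non-root user for the container.")

    # A read-only mount should reject write access (access() reports EROFS as not writable).
    if os.access(REPO_MOUNT, os.W_OK):
        sys.exit(f"{REPO_MOUNT} is writable; expected a read-only (:ro) mount.")

    print("Container security checks passed.")

if __name__ == "__main__":
    main()
```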
- Validate AI-generated commits: Review all code before merging to main branch
- Use feature branches: Worktrees isolate experimental code
- Sign commits (optional): Consider GPG signing for added verification
- Audit AI history repo: Regularly review AI history for unexpected changes (see the sketch after this list)
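One possible spot-check for the AI history repository; the checkout path is assumed for illustration:

```python
# audit_history.py -- print recent commits and touched files for human review.
import subprocess

AI_HISTORY_REPO = "./ai-history"  # assumed checkout location of the AI history repo

# Show the last 20 commits with author, date, and per-file stats so unexpected
# authors or files outside the designated directories stand out.
result = subprocess.run(
    ["git", "-C", AI_HISTORY_REPO, "log", "-n", "20", "--stat",
     "--pretty=format:%h %an %ad %s"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```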
- Use a lockfile: Install from `requirements-lock.txt` for reproducible builds
- Run security scans: Use tools like `pip-audit` or `safety` to check for CVEs (see the sketch after this list)
- Keep dependencies updated: Update regularly but test thoroughly
- Review transitive deps: AI agents may suggest packages with vulnerable dependencies
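A minimal CI step, assuming `pip-audit` is installed and the lockfile sits at the repository root:

```python
# audit_deps.py -- fail the build when known CVEs are reported for the lockfile.
import subprocess
import sys

result = subprocess.run(
    ["pip-audit", "--requirement", "requirements-lock.txt"],
    check=False,
)
sys.exit(result.returncode)  # pip-audit exits non-zero when vulnerabilities are found
```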
Known security considerations and their mitigations:
- No sandbox execution: AI-generated code is not run in a sandbox during review
- Code quality varies: AI may generate insecure patterns (SQL injection, XSS, etc.)
- Dependency suggestions: AI may suggest outdated or vulnerable packages
- Mitigation: Mandatory human review before accepting any AI outputs
- Full feature specification: Agent 00 receives your entire feature description
- Previous outputs: Subsequent agents receive outputs from earlier agents
- Code context: Integration agent may receive existing code patterns
- Mitigation: Avoid including secrets, PII, or proprietary algorithms in prompts
- Trust assumption: Git operations assume trusted repository (no signature verification)
- Merge conflicts: Manual resolution required - potential for mistakes
- Worktree isolation: Features are isolated but share git object database
- Mitigation: Review all merges carefully, use protected branches
- Claude API calls: All AI processing goes through Anthropic's API
- Retry logic: Failed requests are retried automatically (may expose patterns)
- Rate limiting: Excessive requests may trigger rate limits
- Mitigation: API keys are never logged; use environment variables only
- Shared volumes: AI history repo is mounted read-write for watchers
- File permissions: Watchers write to filesystem (but only in designated directories)
- Container escape: Docker misconfigurations could allow container escape
- Mitigation: Use provided Docker Compose files, don't modify security settings
Planned and future security improvements:
- Add `bandit` SAST scanning to CI/CD
- Implement dependency vulnerability scanning (Snyk or Dependabot)
- Add security policy enforcement in pre-commit hooks
- Sandboxed code execution for AI-generated code review
- Static analysis of AI outputs before application
- Signature verification for AI history commits
- Local LLM support (removes API dependency)
- End-to-end encryption for AI history
- Fine-grained access controls for multi-user deployments
- Audit logging with tamper-proof trail
For urgent security issues, contact:
- Email: security@weftlabs.com
- GPG Key: [TBD]
For general security questions, open a discussion in GitHub Discussions.
Last Updated: 2025-12-31
Security Policy Version: 1.0