We take the security of our AI tools seriously. If you believe you have found a security vulnerability in any of the my-ai-stack repositories, please report it to us as described below.
Please do not report security vulnerabilities through public GitHub issues.
Instead, please report them via email to: security@stack-ai.me
You should receive a response within 48 hours. If for some reason you do not, please follow up via email to ensure we received your original message.
Please include as much of the following information as you can provide; it helps us understand the nature and scope of the possible issue:
- Type of issue (e.g., SQL injection, cross-site scripting, etc.)
- Full paths of source file(s) related to the manifestation of the issue
- The location of the affected source code (tag/branch/commit or direct URL)
- Any special configuration required to reproduce the issue
- Step-by-step instructions to reproduce the issue
- Proof-of-concept or exploit code (if possible)
- Impact of the issue, including how an attacker might exploit the issue
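For example, a report covering these points might look like the following (all details below are placeholders, not a real issue):

```text
Subject: [SECURITY] SQL injection in query builder

Type of issue:      SQL injection
Affected file(s):   src/db/query_builder.py
Location:           main branch, commit <commit-hash>
Configuration:      default settings
Steps to reproduce:
  1. Send a crafted request to the affected endpoint
  2. Observe the unsanitized value reaching the query
Proof of concept:   attached
Impact:             an unauthenticated attacker can read arbitrary rows
```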
We prefer all communications to be in English.
What you can expect from us:
- We will acknowledge your report within 48 hours
- We will investigate and provide an initial assessment within 72 hours
- We will maintain confidentiality of your report
- We will credit you for the discovery (unless you prefer to remain anonymous)
- We follow responsible disclosure principles
This security policy applies to:
- All repositories under the my-ai-stack organization
- All released versions of the software
- Any services hosted by my-ai-stack
The following are considered out of scope:
- Issues in third-party dependencies (please report to the respective project)
- Social engineering attacks
- Physical attacks
- Denial of service attacks (unless they reveal a specific vulnerability)
- Issues requiring physical access to a device
Given the nature of our AI tools, please also consider:
- Prompt Injection: Attempts to bypass safety measures through crafted inputs
- Data Leakage: Unintended exposure of training data or user inputs
- Model Manipulation: Attempts to modify model behavior maliciously
- API Key Exposure: Accidental commit of API keys or credentials
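To illustrate the API key exposure category, a minimal pre-commit-style scan might look like the sketch below. The patterns are simplified assumptions for illustration only; dedicated secret scanners cover far more providers and formats.

```python
import re

# Illustrative credential patterns (assumptions, not exhaustive):
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                          # OpenAI-style secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                             # AWS access key IDs
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),  # generic api_key assignments
]

def find_possible_keys(text: str) -> list[str]:
    """Return substrings that look like hard-coded credentials."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

if __name__ == "__main__":
    sample = 'client = Client(token="AKIAABCDEFGHIJKLMNOP")'
    print(find_possible_keys(sample))
```

Running such a check before each commit (for example via a git pre-commit hook) catches many accidental key commits before they ever reach a public repository.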
If you discover any of these issues, please report them immediately.
We appreciate your efforts to responsibly disclose your findings and will acknowledge your contributions to improving the security of our projects.