Security Integration Proposal
Hey CrewAI team — I built ClawMoat, open-source runtime security for AI agents (npm, MIT, zero deps).
After the live exploitation demos of every major AI agent platform at RSAC 2026, and incidents like the LiteLLM supply chain attack, I think crew-based agents need a security layer between task execution steps.
The gap
CrewAI agents hand off tasks between agents in a pipeline. Each handoff is a potential injection point — a malicious output from one agent becomes a malicious input to the next. ClawMoat can intercept at each step.
Proposed: ClawMoat Crew Task Guard
from crewai import Crew
from clawmoat.integrations.crewai import ClawMoatTaskGuard

# Scan all task inputs/outputs automatically at every handoff
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    security=ClawMoatTaskGuard(policy="strict"),  # proposed parameter
)
Open to building this as a contributed integration or keeping it as a ClawMoat-side package.
Would love to discuss the right approach: https://github.com/darfaz/clawmoat