clawsec

clawsec

clawsec verifies AI agent skills before you install them

1 follower

clawsec verifies AI agent skills before you install them. Our multi-stage pipeline - YARA pattern detection, LLM semantic analysis, and false positive filtering - catches prompt injections, credential harvesting, and hidden malicious code. Paste a skill, get a trust score. No signup needed.
clawsec gallery image
clawsec gallery image
clawsec gallery image
clawsec gallery image
clawsec gallery image
Free
Launch Team / Built With
Anima - OnBrand Vibe Coding
Design-aware AI for modern product teams.
Promoted

What do you think? …

clawsec
Maker
📌
Hey Product Hunt! 👋 We built clawsec because we kept running into the same problem: AI agents are incredibly powerful, but they'll execute whatever skill file you give them - including malicious ones. Prompt injections, credential harvesting, hidden command execution… the attack surface for AI agents is wide open and growing fast. There's no "npm audit" for AI skills. So we built one. How it works: When you submit a skill, clawsec runs it through a 3-stage pipeline: 1) YARA pattern detection - 13 custom rules scan for known malicious patterns like prompt injection, code execution, and unicode steganography. Instant results. 2) LLM semantic analysis - an AI reads the full skill to catch sophisticated attacks that evade static rules. 3) False positive filter - a meta-analysis layer removes false alarms so you don't waste time chasing safe code. You get a trust score (0–100), a clear risk level, and a detailed report showing exactly what was found and where. We're also working on Stage 4: sandboxed execution - actually running skills in isolated containers to catch runtime-only threats. No signup required. Paste a skill URL or raw text, get your results in under 30 seconds, and they stay available for 24 hours. We'd love your feedback - what threats are you most worried about with AI agents? What would make this more useful for your workflow? Try it at clawsec.dev 🦞