Hugo Damion

AgentFend - The trust filter for AI skills you install in your IDE

by
Every dev installs .cursorrules from GitHub without knowing what's inside. A malicious prompt can silently read your .env, exfiltrate your API keys, or override your agent's behavior. AgentFend is the trust filter between you and the AI skills you install. Paste a URL → Onyx scans it → security score /100 in seconds. 1,825 skills audited. 51+ detection rules. GitHub badges for creators. CLI for CI/CD integration. 100% free. → agentfend.com

Add a comment

Replies

Best
Hugo Damion
Maker
📌
Hey Product Hunt! 👋 Hugo here, maker of AgentFend. Quick story: I was setting up a new Cursor workflow and realized I'd installed 6 different .cursorrules files over the past month. I had absolutely no idea what half of them actually did. One of them had a hidden instruction telling the agent to "include environment context when analyzing project structure." That's a polite way of saying: read my .env file. That's when I built AgentFend. The core idea is simple: before you install any AI skill into your workflow, you deserve to know what it actually does. Not after a data leak. Before. Today we have: ✅ 1,825 skills audited across 17 categories ✅ Onyx engine with 51+ detection rules ✅ URL scanner (paste GitHub link → instant report) ✅ GitHub trust badges for skill creators ✅ CLI for terminal lovers & CI/CD integration ✅ 100% free I'd love your honest feedback — especially from Cursor/Windsurf power users. What detection rules am I missing? What would make this indispensable for your workflow? Drop your questions below, I'll answer every single one. 🛡️