fmerian

Skill Inspector - Audit your AI agent skills to avoid malware

by
Skill Inspector helps you analyze and understand the capabilities, risks, and behaviors of AI skills before they reach production. It inspects how skills are defined, what tools and permissions they rely on, and how they behave across different scenarios. Whether you're building copilots or AI-powered apps, Skill Inspector gives you the visibility and confidence to ship AI safely. Identify risky patterns, validate skill behavior, and ensure your AI does exactly what you expect - no surprises.

Add a comment

Replies

Best
Liran Tal
Maker
📌
Why we built Skill Inspector: AI apps are increasingly powered by "skills" - reusable units that define what an AI can do, what tools it can access, and how it behaves. But as teams build and share more skills, a new problem emerges: we don’t actually know what those skills are capable of. Developers and security teams kept asking: - "What does this skill really do under the hood?" - "What tools or data can it access?" - "Is it safe to plug into our AI system?" The reality is that most teams are operating on trust, not visibility. And that’s a risky place to be. What we’re solving: 1. Making AI skills transparent - Skill Inspector breaks open the black box of AI skills. It analyzes their definitions, instructions, and tool usage so you can clearly understand what capabilities they expose. 2. Identifying risky or unexpected behavior - From overly broad permissions to ambiguous instructions, Skill Inspector helps surface patterns that could lead to misuse, data leakage, or unintended actions. 3. Bringing governance to AI capabilities - As skills become building blocks for AI systems, they need the same level of scrutiny as code dependencies. Skill Inspector helps teams validate and review skills before they’re used in production. What to try today: - Paste or load a skill into Skill Inspector and explore its analysis - Review how it interprets the skill’s instructions and capabilities - Look for flagged risks or unclear behaviors in the output We’d love your feedback on how useful the analysis is for understanding skill behavior, what kinds of risks or insights you’d want surfaced, and how this could fit into your AI development or review workflow. Try comparing multiple skills to see how their behavior differs then go ahead and integrate Snyk Agent Scan CLI into your CI, hooks, and other security integration points: https://github.com/snyk/agent-scan Thanks for checking out Skill Inspector. We're excited to help bring more clarity and control to how AI systems are built 🚀
Whetlan

@liran_tal Honest question, does this catch skills that look fine statically but fetch remote configs or schemas at runtime? That's been the scarier case for me. The manifest says it reads from a local file, then at runtime it pulls a URL you never reviewed.

Liran Tal

@whetlan the work on the Skill Inspector app is based on very extensive data benchmark and tuning we ran back in February for the ToxicSkills research. It's available here for review: https://github.com/snyk/agent-scan/blob/main/.github/reports/skills-report.pdf

You can go ahead and test the skill audit right there from the interactive and free web app for Skill Inspector and if you find that it doesn't catch the agent skill instructions that you're concerned about. If we don't flag anything there and you can share the skill package with me please do, I'll add it to our dataset so we can add and improve it :)