Launched this week
SkillShield


Security-scored directory for AI skills and agent tools


The first security-scored directory for AI skills. Scan GitHub/GitLab repos with SKILL.md files through 4-layer security analysis: manifest, static code, dependency, and LLM behavioral checks. Get 0-100 trust scores, real-time vulnerability detection, and security badges. 8,890+ skills scanned, 6,300+ findings identified. Part of The Red Council security suite. Discover trusted AI capabilities or validate your own.

Sherif Kozman
Maker
šŸ“Œ
Hey Product Hunt! šŸ‘‹ I'm excited to launch SkillShield, the security-scored directory for AI skills.

**The Problem:** As AI agents become more powerful, they're being given access to external tools and "skills", but how do you know if those skills are safe? A malicious or vulnerable skill could leak data, expose APIs, or worse.

**The Solution:** SkillShield scans AI skill repositories (SKILL.md files) through 4 security layers:

- Manifest analysis
- Static code analysis
- Dependency graph checking
- LLM behavioral safety testing

Each skill gets a 0-100 trust score, making it easy to identify safe capabilities.

**What's Live:**

āœ… 8,890+ skills already scanned
āœ… Real-time vulnerability detection
āœ… Security badge generation
āœ… Filter by trust score, findings, and category
āœ… Part of The Red Council security suite (165+ attack patterns)

**Why Now:** With Claude's Computer Use, OpenAI's function calling, and the explosion of AI agent frameworks, we need security standards before things break at scale.

I'd love your feedback! What security features would make you trust an AI skill?
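To make the scoring idea concrete, here is a minimal sketch of how per-layer findings could roll up into a 0-100 trust score. The layer names come from the post; the severity weights, data shapes, and function names are illustrative assumptions, not SkillShield's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class LayerResult:
    name: str  # "manifest", "static", "dependency", or "llm_behavior"
    findings: list = field(default_factory=list)  # (severity, description) pairs

# Assumed penalty table; real weights would be tuned per layer and severity.
SEVERITY_PENALTY = {"low": 5, "medium": 15, "high": 30, "critical": 60}

def trust_score(layers):
    """Start at 100 and subtract a penalty for each finding, floored at 0."""
    score = 100
    for layer in layers:
        for severity, _desc in layer.findings:
            score -= SEVERITY_PENALTY.get(severity, 10)
    return max(score, 0)

layers = [
    LayerResult("manifest", [("low", "missing license field")]),
    LayerResult("static", [("high", "exec() on remote input")]),
    LayerResult("dependency", []),
    LayerResult("llm_behavior", []),
]
print(trust_score(layers))  # 100 - 5 - 30 = 65
```

A subtractive model like this keeps the score explainable: every point lost maps back to a specific finding in the report.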
Van de Vouchy
Hey Sherif, that question of "how do you know if a skill is safe" is something most people probably don't think about until it's too late. Was there a specific moment where you looked at an AI skill or tool and thought, "Wait, I have no idea what this is actually doing under the hood"?
Sherif Kozman

@vouchy Each skill goes through 4 layers of scanning, which are listed in the skill report:

- Manifest analysis
- Static code analysis
- Dependency graph checking
- LLM behavioral safety testing

The skill is then scored accordingly, and any detections are listed in the report, so there is complete transparency.

Each scan is also pinned to the repo's commit hash, so if the code changes later the report can be flagged and updated.

Example : https://skillshield.io/report/25f85f43360e9163
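The commit-hash pinning described above can be sketched roughly as follows. The function names, report shape, and the use of `git ls-remote` are assumptions for illustration, not SkillShield's actual API:

```python
import subprocess

def repo_head(repo_url: str) -> str:
    """Fetch the remote HEAD commit hash with `git ls-remote`."""
    out = subprocess.run(
        ["git", "ls-remote", repo_url, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    # Output format: "<hash>\tHEAD"
    return out.stdout.split()[0]

def report_status(pinned_hash: str, current_head: str) -> str:
    """A report is 'current' only while the repo HEAD matches the scanned commit."""
    return "current" if pinned_hash == current_head else "stale"

# Hypothetical usage: re-check a stored report against the live repo.
# status = report_status(report["commit_hash"], repo_head(report["repo_url"]))
```

Pinning to a commit hash means a good score can never silently vouch for code that was pushed after the scan.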

Charles Sturt

Interesting - I launched skillshield.dev on Feb 6 with the same concept. Would love to chat about how our approaches differ...

Sherif Kozman

@charlescsturt Sure, would love to. Great website too!

Sarrah Pitaliya

Hey @sherif_kozman

A visible trust score + layered scanning (especially behavioral LLM testing) is a smart way to make security actionable instead of abstract.

We're seeing a similar shift at ZeroThreat.ai. Automation is scaling fast, and security needs to be embedded, not bolted on later.

I'd like to know how you're validating against business logic abuse or privilege escalation between chained skills. That's usually where things get tricky.