Launched this week
SkillShield
Security-scored directory for AI skills and agent tools
35 followers
The first security-scored directory for AI skills. Scan GitHub/GitLab repos with SKILL.md files through 4-layer security analysis: manifest, static code, dependency, and LLM behavioral checks. Get 0-100 trust scores, real-time vulnerability detection, and security badges. 8,890+ skills scanned, 6,300+ findings identified. Part of The Red Council security suite. Discover trusted AI capabilities or validate your own.

@vouchy Each skill goes through 4 layers of scanning, which are listed in the skill report link:
- Manifest analysis
- Static code analysis
- Dependency graph
- LLM behaviour safety
It is then scored accordingly, and any detections are listed within the report. This way there is complete transparency.
Each scan is also pinned to the commit hash, so if the repo changes later the report can be updated.
Example: https://skillshield.io/report/25f85f43360e9163
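For readers curious what a pipeline like this could look like, here is a minimal sketch of a 4-layer scan that produces a 0-100 trust score pinned to a commit hash. All names, checks, and penalty weights are hypothetical illustrations, not SkillShield's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    layer: str      # which analysis layer produced the finding
    severity: int   # 1 (info) .. 10 (critical)
    message: str

@dataclass
class ScanReport:
    commit_hash: str                         # report is pinned to this commit
    findings: list = field(default_factory=list)

    @property
    def trust_score(self) -> int:
        """0-100 score: start at 100, subtract a per-finding penalty."""
        penalty = sum(f.severity * 3 for f in self.findings)  # illustrative weight
        return max(0, 100 - penalty)

def scan_skill(repo: dict, commit_hash: str) -> ScanReport:
    """Run toy stand-ins for the four layers and collect findings."""
    report = ScanReport(commit_hash=commit_hash)
    # Layer 1: manifest analysis
    if "SKILL.md" not in repo.get("files", []):
        report.findings.append(Finding("manifest", 8, "missing SKILL.md"))
    # Layer 2: static code analysis
    if "eval(" in repo.get("source", ""):
        report.findings.append(Finding("static_code", 9, "use of eval()"))
    # Layer 3: dependency graph
    for dep in repo.get("dependencies", []):
        if dep.get("known_vulnerable"):
            report.findings.append(
                Finding("dependency_graph", 7, f"vulnerable dep: {dep['name']}"))
    # Layer 4: LLM behaviour safety
    if repo.get("prompt_injection_detected"):
        report.findings.append(
            Finding("llm_behaviour", 10, "prompt injection in skill instructions"))
    return report
```

Because the report carries the commit hash, a later push to the repo would no longer match the pinned hash, which is how a rescan can be triggered and the score updated.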
Interesting - I launched skillshield.dev on Feb 6 with the same concept. Would love to chat about how our approaches differ...
@charlescsturt Sure, would love to. Great website too.
ZeroThreat.ai
Hey @sherif_kozman
A visible trust score + layered scanning (especially behavioral LLM testing) is a smart way to make security actionable instead of abstract.
We're seeing a similar shift at ZeroThreat.ai: automation is scaling fast, and security needs to be embedded, not bolted on later.
Would like to know how you're validating business logic abuse or privilege escalation between chained skills? That's usually where things get tricky.