Launched this week
SkillShield
Security-scored directory for AI skills and agent tools
40 followers
The first security-scored directory for AI skills. Scan GitHub/GitLab repos with SKILL.md files through 4-layer security analysis: manifest, static code, dependency, and LLM behavioral checks. Get 0-100 trust scores, real-time vulnerability detection, and security badges. 8,890+ skills scanned, 6,300+ findings identified. Part of The Red Council security suite. Discover trusted AI capabilities or validate your own.

Interesting - I launched skillshield.dev on Feb 6 with the same concept. Would love to chat about how our approaches differ...
@charlescsturt Sure would love to. Great website too
@vouchy Each skill goes through 4 layers of scanning, all listed in the skill report link:
- Manifest analysis
- Static code analysis
- Dependency graph
- LLM behaviour safety
It is then scored accordingly, and any detections are listed within the report, so there is complete transparency.
Each scan is also pinned to the commit hash, so if the skill changes later the score can be updated.
Example: https://skillshield.io/report/25f85f43360e9163
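The layered scoring described above can be sketched roughly as follows. This is a minimal illustration only: the layer names mirror the four scans listed in the comment, but the weights, severity penalties, and aggregation formula are assumptions for the sketch, not SkillShield's actual scoring logic.

```python
from dataclasses import dataclass

# Illustrative weights per scanning layer (assumed, not SkillShield's real values).
LAYER_WEIGHTS = {
    "manifest": 0.2,
    "static_code": 0.3,
    "dependency": 0.2,
    "llm_behaviour": 0.3,
}

# Illustrative score deduction per finding severity (also assumed).
SEVERITY_PENALTY = {"low": 5, "medium": 15, "high": 40}

@dataclass
class Finding:
    layer: str
    severity: str  # "low" | "medium" | "high"

def trust_score(findings: list[Finding]) -> int:
    """Start each layer at 100, deduct per finding, then take a weighted mean
    across layers to get a single 0-100 trust score."""
    layer_scores = {layer: 100.0 for layer in LAYER_WEIGHTS}
    for f in findings:
        layer_scores[f.layer] = max(0.0, layer_scores[f.layer] - SEVERITY_PENALTY[f.severity])
    return round(sum(LAYER_WEIGHTS[name] * score for name, score in layer_scores.items()))

# A clean skill scores 100; findings pull the score down per layer.
print(trust_score([]))                                                      # 100
print(trust_score([Finding("static_code", "high"), Finding("dependency", "low")]))  # 87
```

Pinning each report to a commit hash, as mentioned above, would then simply mean keying stored scores by `(repo, commit_hash)` so a new push triggers a fresh scan rather than silently reusing a stale score.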
ZeroThreat.ai
Hey @sherif_kozman
A visible trust score + layered scanning (especially behavioral LLM testing) is a smart way to make security actionable instead of abstract.
We’re seeing a similar shift at ZeroThreat.ai. Automation is scaling fast, and security needs to be embedded, not bolted on later.
Would like to know how you’re validating business logic abuse or privilege escalation between chained skills? That’s usually where things get tricky.
@sarrah_pitaliya Happy to share. Also, each skill has its own dedicated report page listing the findings and the individual scans that were run.