All activity
Jeongki Park left a comment
The eval-driven approach makes sense. Most teams copy skill files across projects and hope they still work after a model update - there's no feedback loop telling you the context degraded. Having structured evals that catch regression before it hits production is the missing piece. Curious about the version compatibility matrix. When a new model version drops (say Claude Opus to Sonnet), how...

Tessl: Optimize agent skills, ship 3× better code.
Jeongki Park left a comment
Behavior-based scoring is the right call. Most registry security tools just check known CVE lists, but the real danger is packages that pass all the obvious checks and then do something unexpected at install time. Focusing on what the code actually does rather than what the listing claims is a much stronger signal. The IDE extension scanning installed extensions in real time is a nice touch - most...

Koidex: Know if a package, extension, or AI model is actually safe
Jeongki Park left a comment
The cross-agent translation problem is real. I have 19 skill files for Claude Code, and every time I try something in Cursor the format is completely different. Having a single source that compiles to each agent's format would save a lot of duplicated effort. How does Primer handle codebases with multiple languages? Does it generate separate skills per language or unified ones?

Skillkit: The package manager for AI agent skills
Jeongki Park left a comment
The plan-stage validation approach is really smart. Most governance tools catch problems after code is written; by then the developer has already invested time and pushes back on changes. Catching issues during the planning phase is a much better feedback loop. Curious about the ML-based rule matching - how does it handle edge cases where a task touches multiple domains with conflicting rules? Does it...
Straion: Manage Rules for AI Coding Agents
