
Cleq
Score how well your team guards AI-generated code
6 followers
AI Code Guardian is a GitHub App that scores every PR for security, performance, and quality. Track your team's Guardian Score to see who's catching AI mistakes.

What we score:
- PR Score (0-100) — How clean is this code?
- Guardian Score (0-100) — How effective is this reviewer?
- Team Health (0-100) — Is AI helping or hurting?

Thanks for focusing on guardrails. Running a compliance-heavy B2B product, this is exactly the kind of visibility teams need as AI increases code output.
@new_user___0332026dc92a2263d0185d0 thanks 🙏
The Guardian Score concept is brilliant—I've seen too many "LGTM" reviews that miss obvious issues. My biggest concern with scoring systems like this is calibration: how does it distinguish between a senior dev who catches subtle performance issues versus someone who nitpicks code style? Also, does the score account for false positives, or is a rejected suggestion counted as a negative even if the original code was actually correct?
Thanks @easytoolsdev, great questions!
On calibration: we weight by risk. Reviews on high blast-radius, critical-path, or AI-generated PRs count more. We also detect trivial comments ("nit", "lgtm", "+1") and apply a spam penalty when the majority of a reviewer's comments are low-value, so substantive catches on risky PRs naturally score higher.
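To make the idea concrete, here's a minimal sketch of that weighting in Python. The trivial-comment set, risk weight, and spam threshold are illustrative assumptions, not our actual values:

```python
# Hypothetical sketch of risk-weighted review scoring with a spam penalty.
# All constants and field shapes here are illustrative assumptions.
TRIVIAL = {"nit", "lgtm", "+1", "ship it"}

def review_score(comments, pr_risk_weight=1.0, spam_threshold=0.5):
    """Score one review.

    comments: list of review comment strings
    pr_risk_weight: > 1.0 for high blast-radius, critical-path,
                    or AI-generated PRs
    """
    if not comments:
        return 0.0
    trivial = sum(1 for c in comments if c.strip().lower() in TRIVIAL)
    substantive = len(comments) - trivial
    score = substantive * pr_risk_weight
    # Spam penalty: if most of the reviewer's comments are low-value,
    # halve the score for this review.
    if trivial / len(comments) > spam_threshold:
        score *= 0.5
    return score
```

So a single substantive catch on a risky PR outscores a pile of "lgtm"s on a trivial one.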
On false positives: valid concern. Currently a rejected suggestion could hurt the score. We're exploring ways to distinguish "considered but declined" from "ignored", likely by looking at whether there was discussion vs. silence.