Launching today
roast-my-code
AI that scores and roasts your codebase for AI slop patterns
roast-my-code is a CLI that scans your repo for AI-generated code smell patterns — TODOs, placeholder variable names (foo/bar/temp), empty exception handlers, commented-out blocks — and scores it 0–100. It then calls an LLM (Groq free tier, $0 to run) to generate a brutal roast referencing your actual file names and worst offenders. Unlike pylint or flake8, it specifically targets what AI coding assistants leave behind. Exports a shareable HTML report + badge.
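To make the scan-and-score idea concrete, here is a minimal sketch of how a pattern scanner like this could work. This is an illustration, not roast-my-code's actual implementation: the regexes, the per-100-lines penalty, and the `slop_score` name are all assumptions made for the example.

```python
import re
from pathlib import Path

# Hypothetical slop patterns -- illustrative, not the tool's real rule set.
PATTERNS = {
    "todo": re.compile(r"\b(TODO|FIXME)\b"),
    "placeholder_name": re.compile(r"\b(foo|bar|temp)\b"),
    "empty_except": re.compile(r"except[^:\n]*:\s*pass\b"),
    "commented_out_code": re.compile(r"^\s*#\s*(def |class |import )", re.M),
}

def slop_score(repo: Path) -> int:
    """Count pattern hits across .py files and map them to 0-100 (higher = cleaner)."""
    hits = lines = 0
    for path in repo.rglob("*.py"):
        text = path.read_text(errors="ignore")
        lines += text.count("\n") + 1
        hits += sum(len(p.findall(text)) for p in PATTERNS.values())
    if lines == 0:
        return 100
    # Assumed scoring rule: one point off per hit per 100 lines, floored at 0.
    return max(0, 100 - round(100 * hits / lines))
```

The real tool presumably weights patterns differently and tracks per-file offenders so the LLM roast can name names; this sketch only shows the shape of the pipeline.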


Hey PH!
I built roast-my-code after getting sick of AI-generated PRs slipping
through code review. TODOs everywhere, functions named "do_stuff",
empty except blocks — you know the ones.
Running it costs $0 by default (Groq free tier). Just:
pip install roast-my-code
roast ./your-repo
I ran it on the Linux kernel — 67/100. On my own repo — 78/100.
On a colleague's legacy codebase — 12/100. They asked me not to share it.
Would love to hear what AI slop patterns you've spotted that I should
add to the analyzer. Drop them in the comments!
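For anyone sketching a detector to suggest, regex isn't the only option: empty except blocks, for instance, are easy to find reliably with Python's `ast` module. A sketch of one such detector (my own illustration, not the analyzer's code, with an assumed `empty_handlers` name):

```python
import ast

def empty_handlers(source: str) -> list[int]:
    """Return line numbers of except blocks whose body is only `pass` or `...`."""
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler) and len(node.body) == 1:
            stmt = node.body[0]
            if isinstance(stmt, ast.Pass) or (
                isinstance(stmt, ast.Expr)
                and isinstance(stmt.value, ast.Constant)
                and stmt.value.value is Ellipsis
            ):
                found.append(node.lineno)
    return found
```

Unlike a regex, this catches the handler whether `pass` sits on the same line or three lines down, and it never fires inside strings or comments.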