Rudel
hey all!
we built rudel because we were using claude code / codex every day, but had no idea what was actually happening across sessions.
which sessions worked?
which ones got abandoned?
where were tokens going?
were we getting better, or just spending more?
we launched a month ago and got a great reception.
so we tried something more fun: spotify wrapped x fifa ultimate team cards for claude code and codex.
upload your sessions, and rudel gives you your ai coder card.
some examples:
roadrunner: fast, frequent, high-output sessions
tourist: light usage, lots of starts, low commitment
company card: high intensity, high spend, meh output
maniac: broad, consistent, intense usage across repos
adhd brain (me): lots of repos, mid-to-low throughput across them
and 4 more...
the classifier runs on derived metadata like duration, token counts, model mix, repo count, and commit signals.
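as a toy illustration of how archetypes could fall out of metadata like that (the field names, thresholds, and rules below are invented for the sketch, not rudel's actual classifier):

```python
from dataclasses import dataclass

@dataclass
class SessionStats:
    # aggregated per-user metadata; hypothetical schema, not rudel's
    sessions: int           # number of sessions uploaded
    avg_duration_min: float # mean session length
    total_tokens: int       # tokens across all sessions
    repo_count: int         # distinct repos touched
    commits: int            # commits attributable to sessions

def classify(s: SessionStats) -> str:
    """toy rule-based archetype classifier with made-up thresholds."""
    commit_rate = s.commits / max(s.sessions, 1)
    if s.avg_duration_min < 10 and s.sessions > 20 and commit_rate > 0.5:
        return "roadrunner"    # fast, frequent, high-output
    if s.avg_duration_min < 5 and commit_rate < 0.1:
        return "tourist"       # lots of starts, low commitment
    if s.total_tokens > 5_000_000 and commit_rate < 0.2:
        return "company card"  # high spend, meh output
    if s.repo_count > 10 and commit_rate < 0.3:
        return "adhd brain"    # many repos, mid-to-low throughput
    return "maniac"            # broad, consistent, intense usage

print(classify(SessionStats(sessions=30, avg_duration_min=8.0,
                            total_tokens=800_000, repo_count=3,
                            commits=20)))
# → roadrunner
```

the real version weighs model mix and commit signals too; the point is just that a handful of derived numbers is enough to bucket usage styles.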
free and open source.
would love feedback on the cards, the archetypes, and what else you’d want to understand about your claude code / codex usage.
cheers!