Launching today

StackLint
Top 10 fixes for your repo, not a 500-warning pile
Most code scanners dump 500 warnings per repo, forcing devs to triage what to fix first. StackLint inverts this: paste a GitHub or GitLab URL, four lenses run in parallel (vulnerable deps, outdated majors, untested zones, duplication), and you get back the top 10 fixes ranked by impact. A grade A-F and an embeddable README badge ship with every scan.

Hey Product Hunt,
Every codebase audit I did at work ended the same way. Static analyzers surface 500 warnings. Three get fixed. The rest become noise. "Too much information" is not a feature.
I'm a senior SWE from France; I built this solo on evenings and weekends and shipped it two weeks ago. The bet is the opposite of a dashboard: you paste a public GitHub or GitLab URL, a shallow clone runs four lenses in parallel on the server (vulnerable deps via OSV, outdated majors, untested code regions, non-trivial duplication), and you get back the top 10 fixes worth doing this week, ranked by severity and type-weight. The list is short enough to actually ship this sprint.
Each scan also ships:
- A grade A to F across four pillars (security 40, maintenance 20, testing 25, duplication 15). Formula, per-issue weights, and anti-gaming rules are documented on stacklint.app/scoring. Think of the grade as the plan compressed to one character.
- A shields.io-style SVG badge you can embed in your README right from the result page.
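To make the pillar weighting concrete, here is a toy sketch of how a capped, weighted grade could work. The weights (security 40, maintenance 20, testing 25, duplication 15) and the cap idea come from this post; the clamping, letter cutoffs, and function names are my assumptions, not the documented formula at stacklint.app/scoring:

```typescript
// Toy sketch of a four-pillar weighted grade. Weights are from the post;
// everything else (cutoffs, clamping) is a hypothetical illustration.

type Pillar = "security" | "maintenance" | "testing" | "duplication";

const WEIGHTS: Record<Pillar, number> = {
  security: 40,
  maintenance: 20,
  testing: 25,
  duplication: 15,
};

// Each pillar contributes a 0-1 health ratio, clamped per pillar, so a
// single-axis failure can only remove that pillar's weight from the total.
export function overallScore(pillars: Record<Pillar, number>): number {
  let total = 0;
  for (const p of Object.keys(WEIGHTS) as Pillar[]) {
    const health = Math.min(1, Math.max(0, pillars[p])); // clamp to [0, 1]
    total += health * WEIGHTS[p];
  }
  return total; // 0-100
}

export function letterGrade(score: number): string {
  if (score >= 90) return "A";
  if (score >= 75) return "B";
  if (score >= 60) return "C";
  if (score >= 45) return "D";
  return "F";
}
```

Under this sketch, a repo that fails security completely but is healthy elsewhere lands at 60 (a C), not an F, which is the "capped pillar" behavior described below.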
Anonymous scans need no signup. Source is never persisted, only findings metadata. Free tier covers one repo per account, manual scan on demand, and a weekly automated re-scan. Node.js and TypeScript ecosystems are the primary target today.
What it does not do yet, so nobody is disappointed on arrival:
- Custom rule authoring
- PR bots or auto-fix codemods
- SBOM or full supply-chain graph
- Continuous monitoring beyond the weekly rescan
A few design choices worth calling out, because they shape how the grade behaves:
- Pinning a vulnerable version does not silence OSV.
- Splitting a file does not remove untested-zone findings.
- Near-duplicate detection is identifier-normalized, so renaming variables does not hide a clone.
- Each pillar is capped, so a single-axis failure never bottoms the whole score.
The full set of invariants, with per-issue weights, lives at stacklint.app/scoring.

Two things I'd especially like feedback on:
1. Are the four-pillar weights (40/20/25/15) defensible, or arbitrary?
2. Would you actually embed the badge on a repo you maintain?
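On the identifier-normalization point above: the core idea can be shown in a few lines. This is a toy illustration (made-up keyword list, naive regex that would also touch string literals), not StackLint's actual detector:

```typescript
// Toy identifier-normalized clone check: replace every identifier with a
// placeholder before comparing, so renaming variables cannot hide a clone.
// The keyword list and regex are illustrative assumptions only.

const KEYWORDS = new Set(["const", "let", "var", "return", "function", "if", "else", "for"]);

export function normalize(code: string): string {
  return code
    // Swap identifier-like tokens for "_", keeping keywords intact.
    .replace(/[A-Za-z_$][\w$]*/g, (tok) => (KEYWORDS.has(tok) ? tok : "_"))
    // Collapse whitespace so formatting differences do not matter either.
    .replace(/\s+/g, " ")
    .trim();
}

export function isClone(a: string, b: string): boolean {
  return normalize(a) === normalize(b);
}
```

Here `isClone("const total = price * qty;", "const sum = cost * n;")` is true, because both normalize to the same shape, while snippets with different structure or literals stay distinct.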
Try it: https://stacklint.app/analyze
Happy to answer questions today.