Yesterday was a busy day at CodeReviewr.

Claude Opus 4.5 is live! We shipped Claude Opus 4.5 integration (and holy hell, the code analysis depth is next-level). While we were at it, we built a model-swap system: adding new frontier models is now a one-button deploy. No more waiting to test the latest from Anthropic, OpenAI, or anyone else.

Real-time package vulnerability scanning. Thanks to a user who pinged us about Sha1-Hulud, a massive NPM supply chain attack hitting hundreds of packages, we dropped everything and built a package advisory system. Starting today, every PR gets scanned against known vulnerabilities before it hits your main branch. No more accidentally merging compromised dependencies. No more "wait, when did Lodash get flagged?" moments three months later. Just instant alerts when something in your package.json is sus (quick sketch of the idea below).

This is the kind of thing that should be standard in code review tools from day one. Supply chain attacks aren't edge cases anymore.

Both features went live at https://codereviewr.app this morning. Still charging per token, not per developer. Still no subscription. Still building in public.

#buildinpublic #codereview #ai #anthropic #malware #features
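For the curious, here's a minimal sketch of what a pre-merge advisory scan can look like. This isn't our actual pipeline, just one way to wire it up: read package.json and ask the public OSV advisory database (osv.dev) about each dependency. The exact-version handling is a simplification.

// Sketch only: check package.json dependencies against the OSV advisory API.
import { readFile } from "node:fs/promises";

interface OsvVuln {
  id: string;
  summary?: string;
}

// Ask OSV whether a specific npm package version has known advisories.
async function queryOsv(name: string, version: string): Promise<OsvVuln[]> {
  const res = await fetch("https://api.osv.dev/v1/query", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ package: { name, ecosystem: "npm" }, version }),
  });
  const data = (await res.json()) as { vulns?: OsvVuln[] };
  return data.vulns ?? [];
}

async function scanPackageJson(path: string): Promise<void> {
  const pkg = JSON.parse(await readFile(path, "utf8"));
  const deps: Record<string, string> = { ...pkg.dependencies, ...pkg.devDependencies };
  for (const [name, range] of Object.entries(deps)) {
    // OSV matches concrete versions; stripping "^"/"~" is a shortcut here.
    // A real scanner would resolve the lockfile instead.
    const version = range.replace(/^[~^]/, "");
    const vulns = await queryOsv(name, version);
    for (const v of vulns) {
      console.warn(`ADVISORY ${v.id}: ${name}@${version} ${v.summary ?? ""}`);
    }
  }
}

scanPackageJson("package.json").catch(console.error);

In CI you fail the check whenever anything comes back, which is effectively what the new scan does before a PR can reach your main branch.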
When Opus 4 launched, it felt like quite a leap to me. That's why I'm interested to hear your real-world impressions of the newest Opus 4.5 from @Claude by Anthropic.
Have you noticed any improvements in your workflow?
Any annoyances, like the notorious usage limits Opus 4 had in Claude Code?
We're adding Insights to CodeReviewr: a static analyzer that maps your codebase's health before the AI review even starts. https://vimeo.com/1136120639?share=copy&fl=sv&fe=ci
What you'll see:
Cyclomatic complexity and file metrics
Dependency graphs (fan-in/fan-out, circular deps)
Unused exports and isolated files
Issue hotspots by severity and risk score
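To make the graph metrics concrete, here's a toy sketch of fan-in/fan-out counting and circular-dependency detection over a plain import map. Illustrative only (the module names are made up), not the actual Insights engine.

// Toy sketch: fan-in/fan-out and cycle detection on a module import map.
type DepGraph = Record<string, string[]>; // module -> modules it imports

function fanMetrics(graph: DepGraph): Record<string, { fanIn: number; fanOut: number }> {
  const metrics: Record<string, { fanIn: number; fanOut: number }> = {};
  for (const mod of Object.keys(graph)) {
    metrics[mod] = { fanIn: 0, fanOut: graph[mod].length };
  }
  for (const deps of Object.values(graph)) {
    for (const dep of deps) {
      if (metrics[dep]) metrics[dep].fanIn += 1;
    }
  }
  return metrics;
}

// Depth-first search that returns the first circular dependency it finds.
function findCycle(graph: DepGraph): string[] | null {
  const visiting = new Set<string>();
  const done = new Set<string>();
  const dfs = (mod: string, path: string[]): string[] | null => {
    if (visiting.has(mod)) return [...path.slice(path.indexOf(mod)), mod];
    if (done.has(mod)) return null;
    visiting.add(mod);
    for (const dep of graph[mod] ?? []) {
      const cycle = dfs(dep, [...path, mod]);
      if (cycle) return cycle;
    }
    visiting.delete(mod);
    done.add(mod);
    return null;
  };
  for (const mod of Object.keys(graph)) {
    const cycle = dfs(mod, []);
    if (cycle) return cycle;
  }
  return null;
}

// Example: a and b import each other, c only imports a.
const graph: DepGraph = { a: ["b"], b: ["a"], c: ["a"] };
console.log(fanMetrics(graph)); // a has fanIn 2 (imported by b and c), fanOut 1
console.log(findCycle(graph));  // ["a", "b", "a"] -> circular dependency

Fan-in tells you what breaks when a module changes; fan-out tells you how much that module leans on everything else. High numbers in either direction are where reviews tend to hurt.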
That's useful on its own. But here's where it gets interesting.