cubic 2.0 - Code reviews for the AI era
Over the past few months, we've been completely rebuilding cubic's AI review engine. Today we're excited to announce cubic 2.0, the most accurate AI code reviewer available.
cubic helps teams read, trust, and merge AI-generated code in real repos. It is optimized for accuracy and low noise, and it goes beyond PR comments with a CLI, AI docs, and PR description updates.
Used by 100+ orgs including Cal.com, n8n, Granola, and Linux Foundation projects.



Replies
cubic
Hey Hunters, I’m Paul, the founder of cubic.
If you’ve tried AI code review tools before, you’ve probably seen both failure modes:
1. they miss the important stuff
2. they comment so much that you stop reading
We built cubic because review is now the bottleneck. AI made it easy to produce code. It did not make it easy to trust a big diff in a complex repo.
Over the last few months we’ve been iterating hard on the engine, and the change is big enough that we’re calling it cubic 2.0. It’s faster, more accurate, and noticeably less noisy than it was a few months ago.
The other thing we learned is that “a GitHub bot that comments on PRs” is not enough anymore. Review is a workflow, not a feature, so we built the pieces around it too:
- incremental checks on every push
- PR descriptions that stay accurate
- wiki docs that stay in sync
- `cubic.yaml` for config-as-code
- and a CLI so you can run review before you push
If you try it, I’d love blunt feedback:
- What did it catch that you actually cared about?
- What should it stop commenting on?
I’ll be here in the comments!
cubic
@thekidsprinkles Thanks!
Framing review as a workflow, not just a PR bot, really resonates. Curious which piece ends up being most valuable in practice: the incremental checks, the CLI, or the config-as-code?
The focus on accuracy over noise makes sense—most AI reviewers I've seen lean too far in one direction. I'm curious how cubic handles codebases with mixed AI-generated and human-written code. Does it adjust review depth based on the origin of the code, or treat all changes uniformly?
Wow, cubic looks amazing! The updated AI review engine sounds like a game changer. Specifically, how does it handle reviewing auto-generated code so it avoids reinforcing potential biases? Super keen to try this out!
Cool project. I can see how you could easily extend it: add the ability to run full technical audits. That is, you connect Git, it audits the entire project, and it generates a report with recommendations divided into three groups: critical, standard, and minor. If that report is high quality, I think you'll have a huge number of clients!
Tried out Cubic and like the idea of AI assisting with PR reviews. How do you make sure the feedback stays high-signal and doesn’t turn into noise, especially for experienced dev teams already using tools like Copilot?
Curious to hear how you’re thinking about this in real-world workflows.