cubic - Cursor for code review

cubic is an AI-powered code review platform that automatically reviews PRs and gives human reviewers superpowers. It’s the tool of choice for fast-moving teams like cal.com and n8n.

Paul Sanglé-Ferrière

Hey Product Hunt, Paul and Allis here – founders of cubic (formerly mrge)! 👋

We’re building cubic – an AI code review platform to help teams ship code faster with fewer bugs. Our early users include Cal.com, n8n, and Better Auth—teams that handle a lot of PRs every day.

🚀 See it in action for cal.com here

We’re engineers who faced this problem when we worked together at our last startup. Code review quickly became our biggest bottleneck and quality tanked — especially as we started using AI to code more.

We had more PRs to review, subtle AI-written bugs slipped through, and we (humans) found ourselves rubber-stamping PRs without deeply understanding the changes.

👷 We’re building cubic to help solve that. Here’s how it works:

  1. Connect your GitHub repo via our GitHub app in two clicks (and optionally download our desktop app).

  2. AI review: When you open a PR, our AI reviews your changes directly in a secure container. It has context on not just that PR but your whole codebase, so it can pick up patterns and leave comments directly on changed lines. Once the review is done, the sandbox is torn down and your code is deleted. (There’s a rough sketch of this flow right after the list.)

  3. Human-friendly review workflow: Jump into our web (or desktop) app (it’s like Linear but for PRs). Changes are grouped logically (not alphabetically), with important diffs highlighted, visualized, and ready for faster human review.
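
Roughly, step 2’s per-PR lifecycle looks like the sketch below. It’s deliberately simplified and every helper name (createSandbox, runAiReview, reviewPullRequest) is made up for illustration, not our actual API:

```ts
// Simplified sketch of the per-PR flow above. All helper names are made up.

interface ReviewComment {
  path: string;   // file the comment applies to
  line: number;   // changed line the comment is anchored to
  body: string;   // the feedback itself
}

interface Sandbox {
  clone(repoUrl: string, prNumber: number): Promise<void>;
  destroy(): Promise<void>;
}

// Stand-in: in reality this provisions an isolated container per review.
async function createSandbox(): Promise<Sandbox> {
  return {
    clone: async () => { /* fetch the repo and the PR branch */ },
    destroy: async () => { /* tear the container down, deleting the code */ },
  };
}

// Stand-in: the AI pass that can read the whole codebase, not just the diff.
async function runAiReview(sandbox: Sandbox, prNumber: number): Promise<ReviewComment[]> {
  return [];
}

async function reviewPullRequest(repoUrl: string, prNumber: number): Promise<ReviewComment[]> {
  const sandbox = await createSandbox();
  try {
    await sandbox.clone(repoUrl, prNumber);
    return await runAiReview(sandbox, prNumber); // comments land on changed lines
  } finally {
    await sandbox.destroy();                     // nothing is kept once the review is done
  }
}
```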


💻 The AI reviewer works a bit like Cursor in the sense that it navigates your codebase using the same tools a developer would—like jumping to definitions or grepping through code.
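
In sketch form, that toolset looks something like the interface below (illustrative only; grep, goToDefinition, and readFile are stand-ins, not our real internals):

```ts
// Illustrative only: the kind of navigation tools a reviewer agent could call,
// mirroring what a developer does by hand.

interface CodeLocation {
  path: string;
  line: number;
}

interface CodebaseTools {
  // Search the repo for a pattern, like grepping from a terminal.
  grep(pattern: string): Promise<(CodeLocation & { text: string })[]>;
  // Jump to where a symbol is defined, like "go to definition" in an editor.
  goToDefinition(symbol: string): Promise<CodeLocation | null>;
  // Read a file so surrounding context can inform a comment.
  readFile(path: string): Promise<string>;
}
```

The point is that the reviewer can, say, grep for other call sites before flagging a changed function signature, instead of judging the diff in isolation.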

⚡️ The platform itself focuses entirely on making *human* code reviews easier. A big inspiration came from productivity-focused apps like Linear or Superhuman, products that show just how much thoughtful design can impact everyday workflows. We wanted to bring that same feeling into code review.

That’s one reason we built a desktop app. It allowed us to deliver a more polished experience, complete with keyboard shortcuts and a snappy interface.

We think the future of coding isn’t about AI replacing humans—it’s about giving us better tools to quickly understand high-level changes, abstracting more and more of the code itself. As code volume continues to increase, this shift is going to become increasingly important.

🚀 cubic is free for 2 weeks. You also get 50% off for 2 months with the code: PHUNT

Just sign up with your GitHub account to get started!

Looking forward to your feedback—fire away!

Rahul Singh

@paul_sangle_ferriere1 Kudos! I was wondering if it can be used for reviewing opinionated code style? Maybe use existing cursor/trae rules?

Paul Sanglé-Ferrière

@rahool_lol Hey Rahul, thanks for the kind words!


Yep, that’s exactly what our custom rules are for. When you set up your account, we suggest a bunch of rules you can use—either based on your existing Cursor/Trae rules or even your team’s past comments in the repo. You can tweak them to match your style, so you’re not stuck with only generic linting.
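
Conceptually, a rule is just a named description of what the reviewer should look for, optionally scoped to part of the repo. A rough sketch of the idea (simplified, not the exact format you’ll see in the product):

```ts
// Simplified illustration of team-specific review rules; the real format may differ.
interface ReviewRule {
  name: string;
  description: string; // what the reviewer should flag
  paths?: string[];    // optionally scope the rule to part of the repo
}

const teamRules: ReviewRule[] = [
  {
    name: "no-default-exports",
    description: "Prefer named exports; flag any new `export default`.",
    paths: ["src/**/*.ts"],
  },
  {
    name: "typed-api-errors",
    description: "API handlers must return a typed error object, never a bare string.",
    paths: ["apps/api/**"],
  },
];
```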

Merlin Kafka

Congrats on launching mrge, Paul and Allis 🎉 This looks like a smart solution to a growing problem, esp with the rise of Cursor, Windsurf etc. The codebase-aware reviews and logical grouping of changes sound particularly useful. Looking forward to trying this out on our repos!

Paul Sanglé-Ferrière

Thanks @merlin_k, great to hear that.


Feel free to email / DM if you have any feedback!

Neel Patel 🦕

This can be a huge time saver!

Paul Sanglé-Ferrière

Thanks Neel, great to hear it's resonating!

Vic Hu

Awesome idea! Does it support human-in-the-loop escalation when the confidence score is low on certain inferences (e.g. feature vs. bug)? Does it support compliance suggestions such as accessibility, security, privacy, etc?

Paul Sanglé-Ferrière

Hey Vic, thanks for the questions! 🙌

Re: human-in-the-loop—it can’t directly escalate to a person (yet), but you can tag certain PRs or set custom rules so it knows when to flag things for manual review. We're working on deeper integrations here, so any feedback on what you'd want that to look like would be super helpful!

For compliance, yes—it supports suggestions for accessibility, security, privacy, etc. You can actually add your own custom checks if there are specific standards your team follows.
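
For illustration, compliance-flavored checks can be expressed the same way as any other custom rule. A simplified sketch (again, not the exact product config):

```ts
// Hypothetical compliance checks, written as plain rule descriptions.
const complianceChecks = [
  {
    name: "img-requires-alt-text",
    description: "Flag added <img> elements without an alt attribute (accessibility).",
  },
  {
    name: "no-hardcoded-secrets",
    description: "Flag string literals that look like API keys or tokens (security).",
  },
  {
    name: "keep-pii-out-of-logs",
    description: "Flag log statements that include emails or user identifiers (privacy).",
  },
];
```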

Chenjie Yuan

Wow, that is so cool! I want to give it a try now!

Paul Lundin

Love the UI advancements you all are outlining here, we need a lot more of this type of thinking now that we are post-AI and there is more code being generated than ever.

Any thoughts on how this compares to greptile?

Paul Sanglé-Ferrière

@snowandcaffeine Great question!


Big props to greptile—they really helped kick off the AI code review space and got a lot of people thinking seriously about this problem. We’ve actually had quite a few teams join us after trying greptile (and other review bots), so I hear this comparison a lot.


The main things we keep hearing from users are:

  • Our AI catches more relevant, actionable review issues for their codebase.

  • The full end-to-end platform makes reviews way faster, especially after the AI comments. One thing folks seem to love: we automatically group and order PR files in the most logical way (e.g. backend changes > API > UI; toy sketch just below), so on big reviews, you don’t waste any time jumping around.
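
Here’s a toy sketch of that ordering idea; in practice the grouping is inferred from your codebase rather than hardcoded path prefixes like these:

```ts
// Toy example: order changed files so reviewers read backend changes first,
// then the API surface, then the UI that consumes it.
type Layer = "backend" | "api" | "ui" | "other";
const layerOrder: Layer[] = ["backend", "api", "ui", "other"];

function layerOf(path: string): Layer {
  if (path.startsWith("packages/db/")) return "backend"; // hypothetical repo layout
  if (path.startsWith("apps/api/")) return "api";
  if (path.startsWith("apps/web/")) return "ui";
  return "other";
}

function orderForReview(changedFiles: string[]): string[] {
  return [...changedFiles].sort(
    (a, b) => layerOrder.indexOf(layerOf(a)) - layerOrder.indexOf(layerOf(b))
  );
}

// orderForReview(["apps/web/Button.tsx", "apps/api/routes.ts", "packages/db/schema.ts"])
//   -> ["packages/db/schema.ts", "apps/api/routes.ts", "apps/web/Button.tsx"]
```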

We’re really trying to rethink the whole code review workflow from the ground up. I think that’s why teams like cal.com and others have switched over to mrge.


Our goal is to make reviewing and shipping code just as fast and smooth as writing it (kind of like how Cursor changed the writing workflow for devs).

Amir Banker

Hey! Congrats on the product launch — I really love what you’ve built.
I put together a quick redesign of your website to better showcase mrge's value. Thought you might like to check it out: https://www.figma.com/community/file/1499461191851219376/banker-portfolio

Erliza. P

Let's be honest, most human code reviews are just 'LGTM' anyway. Maybe it's time AI takes over?

Rauf Akdemir

Always amazed me how advanced code editors had become, but how medieval PRs still felt lol. Mrge saves us about 6 hours per week (2-dev team), and that's not counting the subtle details it spots that tend to turn into ugly bugs, which cost a lot more time and energy to fix once deployed to production.

Paul Sanglé-Ferrière

Thanks @rauf_akdemir , that's great to hear!

Qingxuan(Nancy) Li

Absolutely love how mrge speeds up reviews without losing depth. Caught a few issues we might have missed otherwise. Feels like a real productivity unlock!
