Aleksandar Blazhev

Graphbit PRFlow - AI code reviewer that catches what others miss

Your AI teammate that reviews every pull request before it ships. Tested on 10 real projects, PRFlow found 7 critical security issues where competitors found zero. Learns your team's standards over time. Pay per review, not per seat.

Musa Molla

Thanks everyone, I'm Musa, founder of GraphBit.

We built PRFlow after getting frustrated with AI code reviewers that flood your PR with noise, miss the issues that actually matter, and feel different every time they run.

The market has options. We know that. We built PRFlow anyway because none of them solved the core problem: consistency and cross-file context in a single pass.

PRFlow is a deterministic baseline reviewer that lives inside GitHub. Open a PR and a structured review posts in minutes, every time, with the same output. It traces the exact function that changed across cross-file dependencies, not just the diff lines. That is how it caught 14 security issues on a PR where another tool found zero.
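A rough sketch of the cross-file tracing idea (this is an illustration using Python's `ast` module, not PRFlow's actual implementation): given the name of a changed function, find every call site across the other files in the PR rather than stopping at the diff lines.

```python
# Hypothetical sketch: trace a changed function across all files in a PR.
# Function and variable names here are illustrative.
import ast


def find_call_sites(changed_func: str, pr_files: dict[str, str]) -> list[tuple[str, int]]:
    """Return (filename, line) for each call to `changed_func` in the PR."""
    hits = []
    for path, source in pr_files.items():
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Call):
                callee = node.func
                # Handle both plain calls (foo()) and attribute calls (mod.foo()).
                name = getattr(callee, "id", getattr(callee, "attr", None))
                if name == changed_func:
                    hits.append((path, node.lineno))
    return hits


pr_files = {
    "auth.py": "def check_token(t):\n    return t == 'secret'\n",
    "views.py": "from auth import check_token\n\ndef login(req):\n    return check_token(req)\n",
}
print(find_call_sites("check_token", pr_files))  # [('views.py', 4)]
```

Even a toy tracer like this shows why diff-only review misses cross-file issues: the dangerous change may be in `auth.py` while the vulnerable call site lives in `views.py`.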

We benchmarked PRFlow on 10 real public pull requests. Rated 4.3/5 on average. Every review is live on GitHub and readable right now.

PRFlow handles the baseline so your team focuses on architecture, intent, and edge cases. Not repeated first-pass checks.

There are other tools. Try PRFlow on a real repo and see the difference yourself. We read every comment.

André J

@musa_molla Yes. Very noisy. You need a master PR reviewer just to review the AI reviewers 😅... Is graphbit free for public repos? I'd love to try it on one particular public repo that's launching soon on PH https://github.com/eoncode/runner-bar/

Musa Molla

@conduit_design Haha exactly, reviewing the AI reviewer is a real problem 😄

Not free for public repos, but we have a launch offer running right now: the first 2,000 users get 200,000 tokens and 20 free tracings. That's more than enough to put it through its paces on runner-bar.

Sign up at platform.graphbit.ai and give it a proper test. Would love to hear what it catches.

Andrรฉ J

@musa_molla I do 300 commits a day. It would last me 45 min 😅 This is the agentic age; we do 10x the work now. Quality PR review is great, but at agentic scale it would cost too much. Do you have plans to address this tension?

Musa Molla

@conduit_design 300 commits a day! Okay, that's a different scale entirely 😄

You're pointing at something real. The coin model works for standard team velocity but agentic pipelines change the math completely.

We're thinking about smarter triggering, reviewing at meaningful checkpoints like pre-merge or when specific file types change, rather than every single commit. That's the direction that makes sense for this use case.
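A minimal sketch of what checkpoint-based triggering could look like (the event names, patterns, and rules here are assumptions for illustration, not PRFlow's actual config): review at meaningful checkpoints, and mid-stream only when sensitive paths change.

```python
# Hypothetical checkpoint-triggering policy for high-velocity agentic workflows.
# Event names and path patterns are illustrative assumptions.
from fnmatch import fnmatch

SENSITIVE = ["*.sql", "auth/*", "*.tf"]          # paths worth reviewing on every push
CHECKPOINTS = {"pr_opened", "ready_for_review", "pre_merge"}


def should_review(event: str, changed_files: list[str]) -> bool:
    """Run a review at checkpoints, or when a push touches sensitive files."""
    if event in CHECKPOINTS:
        return True
    # Ordinary mid-stream pushes only trigger when sensitive files change.
    return any(fnmatch(f, pat) for f in changed_files for pat in SENSITIVE)


print(should_review("push", ["README.md"]))        # False
print(should_review("push", ["auth/session.py"]))  # True
print(should_review("pre_merge", ["README.md"]))   # True
```

The point of a policy like this is that review cost scales with checkpoints and risk, not with raw commit count.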

Would love to understand your workflow better. The agentic scale problem is one we want to solve properly.

MD Amirul Islam

Really like the direction here. Most teams already have code review processes in place, but review fatigue and repetitive comments still slow things down a lot.

What stood out to me about PRFlow is that it seems focused on improving reviewer focus instead of trying to fully replace human reviews. That balance is important for engineering teams.

Curious to see how teams integrate this into their existing PR workflow over time. Congrats on the launch 👍

Musa Molla

@1mirul Exactly the balance we were going for. PRFlow handles the repetitive stuff so the senior devs can focus on what actually needs their eyes. Thanks for getting it.

Rupak Chandra Bhowmick

Thanks, @1mirul. That's very much the design philosophy behind PRFlow. We're not trying to replace human review, but to automate the repetitive process so engineers can focus on architectural decisions, business logic, and edge cases. The goal is to make review workflows more consistent inside GitHub while keeping humans in control of the final judgment.

Vikram

The 'deterministic baseline' part is what caught my eye. Usually AI reviewers feel like a coin toss: one day it's strict, the next it's lazy. Having a consistent output makes it much easier to integrate into a real team's workflow without the senior devs getting annoyed. Support on the ship @musa_molla

Musa Molla

@vikramp7470 Coin toss, that's exactly it 😄 That frustration is why we built it this way. Same PR, same review, every time. Thanks for the support!

Vikram

@musa_molla That's exactly the kind of reliability dev teams need. Great approach 👌

Rupak Chandra Bhowmick

Totally agree, @vikramp7470. That's exactly the bar we're aiming for. Deterministic behavior makes the review process much easier to adopt because teams know what kind of feedback to expect each time. Trust is hard to earn with AI tools, so consistency was a big design priority for us.

Tanjum 🔥 🚀🚀

Really like this approach to AI-assisted code reviews. Instead of replacing engineers, GraphBit PRFlow seems focused on reducing repetitive review noise and helping teams stay focused on meaningful feedback. Cleaner PRs and faster reviews can make a huge difference for engineering teams over time. Curious: how are you handling context awareness across larger PRs or multi-file changes? Congrats on the launch 🚀

Musa Molla

@tanjum That's exactly the balance we were going for, baseline handled automatically so engineers stay focused on what needs human judgment.

On larger PRs and multi-file changes: PRFlow traces the exact function that changed and follows its dependencies across files in the same PR. The token budget is managed so larger PRs don't get shallow reviews; the depth stays consistent regardless of PR size.

Rupak Chandra Bhowmick

@tanjum Thanks. For bigger PRs, PRFlow builds context in layers using our own context engine. It extracts structured context from each changed file, enriches that with cross-file dependencies, and then reviews the PR as a whole rather than one file at a time. For very large PRs, it also uses token budgeting and file prioritization so the review stays focused and useful.
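A toy sketch of that layered idea (the data structures and names here are illustrative assumptions, not GraphBit's context engine): per-file structured context first, then cross-file enrichment, then one combined payload so the model sees the PR as a whole.

```python
# Illustrative three-layer context build; not the real context engine.
from dataclasses import dataclass, field


@dataclass
class FileContext:
    path: str
    defines: list[str]                 # symbols this file defines
    uses: list[str]                    # symbols this file references
    depends_on: list[str] = field(default_factory=list)


def enrich(files: list[FileContext]) -> list[FileContext]:
    """Layer 2: link each file to the PR files defining symbols it uses."""
    defined_in = {s: f.path for f in files for s in f.defines}
    for f in files:
        f.depends_on = sorted({defined_in[s] for s in f.uses
                               if s in defined_in and defined_in[s] != f.path})
    return files


def review_payload(files: list[FileContext]) -> dict:
    """Layer 3: one combined payload so the PR is reviewed as a whole."""
    return {f.path: {"uses": f.uses, "depends_on": f.depends_on} for f in files}


files = enrich([
    FileContext("auth.py", defines=["check_token"], uses=[]),
    FileContext("views.py", defines=["login"], uses=["check_token"]),
])
print(review_payload(files)["views.py"]["depends_on"])  # ['auth.py']
```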

Aleksandar Blazhev

Really happy to be the hunter for PRFlow today.

The team spent serious time building an AI-native code review agent focused on consistent code review at scale, cross-file context, and detecting security vulnerabilities in pull requests before they ship.

For teams dealing with PR review bottlenecks, this is a product that genuinely deserves a closer look.

Musa Molla

@byalexai Thank you for supporting us today, means a lot to the whole team 🙏

Imrul Kayes

Hey PH 👋

I'm Imrul, Business Development Lead at GraphBit and part of the maker team behind PRFlow.

Before this, I spent years working closely with engineering teams across different stages of growth. The pattern I kept seeing was the same: teams slowing down not because their engineers were bad, but because code review had become the bottleneck.

A senior engineer drowning in PRs can't review everything properly. A junior developer waiting days for feedback loses momentum. The review process that was supposed to protect code quality was quietly killing team velocity.

Here's what's interesting:

๐Ÿ” Most AI code reviewers read what changed. They scan the diff and stop there. But the most dangerous bugs - XSS, auth bypass, race conditions - don't live in a single file. They live in how files connect.

โš™๏ธ PRFlow reads the function that changed and traces its dependencies across every file in the PR. That's how it caught 14 security issues on a PR where every other tool found zero.

The problem was never that developers write bad code. It's that no tool could see the full picture until now.

We benchmarked PRFlow against the leading tools on 10 real public PRs. 4.3/5 vs 2.5/5. Every review is live on GitHub. You can read them right now.

Code review is infrastructure. It should be consistent, context-aware, and trustworthy, not a coin toss.

That's what we built.

Have a great launch day everyone! 🚀
– Imrul

Abdul Rehman

Love the baseline approach.
@musa_molla, congrats on the launch! Can it run on every push or just on PR open?

Rupak Chandra Bhowmick

@abod_rehman Yes, it can run on every push to an open PR, not just when the PR is first opened. Right now PRFlow triggers on PR open, on new commits pushed to the branch, and when a draft is moved to ready for review.

Musa Molla

@abod_rehman Thank you! PRFlow triggers on PR open and on every push to an open PR, so automated pull request review happens continuously throughout the lifecycle, not just at the start. Every new commit gets a fresh pass, with no gaps in coverage.
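For anyone curious, those three triggers map cleanly onto GitHub's real `pull_request` webhook actions (`opened`, `synchronize` for new commits, `ready_for_review`); the handler wiring below is a hypothetical sketch, not PRFlow's code.

```python
# Sketch of mapping GitHub pull_request webhook actions to review runs.
# The action names are GitHub's real ones; the handler itself is hypothetical.
REVIEW_ACTIONS = {"opened", "synchronize", "ready_for_review"}


def handle_webhook(event: str, payload: dict) -> bool:
    """Return True when a fresh review pass should run."""
    if event != "pull_request":
        return False
    if payload.get("pull_request", {}).get("draft"):
        return False  # skip draft PRs until they are marked ready
    return payload.get("action") in REVIEW_ACTIONS


print(handle_webhook("pull_request",
                     {"action": "synchronize",
                      "pull_request": {"draft": False}}))  # True
print(handle_webhook("push", {}))                          # False
```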

Alexis Rodriguez

Does your benchmark include PRs with generated code or vendored dependencies?

Rupak Chandra Bhowmick

@alexis_rodriguez7 Mostly no. PRFlow filters out a lot of low-value review surface by default, including dependencies, build artifacts, binary/non-code files, and it also supports repo-level ignore rules for auto-generated or vendored paths. So our benchmark focus was on reviewable PR code, not noise from vendored or generated files.

Musa Molla

@alexis_rodriguez7 Good question. Our benchmark used real open-source PRs; generated files and vendored dependencies are automatically detected and skipped. PRFlow only reviews code your team actually wrote.
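A minimal sketch of that kind of filter (the default patterns here are common-sense assumptions, not PRFlow's actual list): skip dependency and generated paths, and let the repo add its own ignore rules on top.

```python
# Hypothetical reviewable-path filter; patterns are illustrative defaults.
from fnmatch import fnmatch

DEFAULT_SKIP = ["node_modules/*", "vendor/*", "dist/*", "*.min.js", "*.lock", "*.pb.go"]


def reviewable(path: str, repo_ignores: list[str] = ()) -> bool:
    """True when a file should get a real review, not be skipped as noise."""
    return not any(fnmatch(path, pat) for pat in [*DEFAULT_SKIP, *repo_ignores])


print(reviewable("src/auth.py"))          # True
print(reviewable("vendor/lib/util.go"))   # False
print(reviewable("app.min.js"))           # False
```

Repo-level ignore rules then become just extra patterns: `reviewable("docs/gen.md", ["docs/*"])` returns `False`.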

Cam Pritchard

Congrats guys, looks exciting because most AI reviewers just skim the diff and miss the bigger picture. The cross-file dependency mapping is the real unlock, and 14 issues caught where another tool found zero is a serious proof point. Excited to see this grow!

Musa Molla

@campritchard That's exactly it, the diff is just the entry point. The bug lives in what the change touches downstream. Glad that landed clearly 🙏

Rupak Chandra Bhowmick

@campritchard Exactly. The diff is usually just the starting point, not the full problem surface. A lot of the real issues only show up once you trace what the change affects across related files. Glad that part came through clearly.

Lokesh Motwani

Congratulations on the launch. I am a solo dev building Badge, and I will definitely try this on my repo as an AI reviewer. One question: you said you capture cross-file context in a single pass, so the obvious question is how you deal with lost-in-the-middle, because that directly translates to misses in the review.

Musa Molla

@lokesh_motwani1 Great question, and glad you're going to try it on Badge.

The single pass doesn't mean one giant context window. PRFlow extracts only the relevant function scope and its cross-file dependencies before sending to the model, so the actual input is tight and focused, not a full repo dump. That's what keeps the middle from getting lost.

Token budgeting handles the rest: larger PRs get prioritized by semantic significance rather than being truncated blindly.

The important thing is that no system is perfect on very large PRs, but the extraction step before the model call is what keeps the signal-to-noise ratio high.

Lokesh Motwani

@musa_molla Loved your approach on this. Thank you for sharing details.

Rupak Chandra Bhowmick

@lokesh_motwani1 Good question. We try to reduce that risk before the model call, not after it. PRFlow does structured context extraction first, adds cross-file dependency context, then applies per-file token budgets and memory budgets so one large file does not crowd out the rest of the PR.

If the PR is too large, we prioritize the reviewable files instead of pretending nothing gets lost. So the approach is basically controlled compression plus prioritization, not "throw the full diff into one prompt and hope for the best."
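A toy version of that controlled compression (everything here is an illustrative assumption: real systems use an actual tokenizer, not `len(text) // 4`, and smarter truncation than a plain slice): cap each file's share of the context window and pack by priority.

```python
# Hypothetical per-file token budgeting with priority-first packing.
def budget_files(files: list[tuple[str, str, int]], total_budget: int,
                 per_file_cap: int) -> dict[str, str]:
    """files: (path, text, priority). Higher-priority files are packed first."""
    packed, remaining = {}, total_budget
    for path, text, _prio in sorted(files, key=lambda f: -f[2]):
        tokens = len(text) // 4                      # crude token estimate
        take = min(tokens, per_file_cap, remaining)  # cap per file AND overall
        if take <= 0:
            break
        packed[path] = text[: take * 4]              # truncate to the allotment
        remaining -= take
    return packed


packed = budget_files([("big.py", "x" * 4000, 1), ("auth.py", "y" * 400, 5)],
                      total_budget=200, per_file_cap=150)
# auth.py fits whole; big.py is compressed instead of crowding everything out.
print({p: len(t) for p, t in packed.items()})
```

The per-file cap is what prevents the "one huge file crowds out the rest" failure mode; the priority sort is what makes any remaining loss fall on the least important files.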
