All activity
Jordan Carroll left a comment
This is a great idea, especially for startups! I recently turned my attention to using Codex and the AWS CLI to figure out how much new infra is going to cost. Can Recost tell you roughly how much staged changes are going to cost before deploying them?

Recost: Your API costs fully visible.
Jordan Carroll left a comment
Something I have been wondering, but would love to get a wider take on: "Do you trust AI in code reviews yet, or are you still skeptical? What would give you the confidence to rely on it?"

RaptorCI: Catch risky code changes and weak tests before they ship
Jordan Carroll left a comment
Hey Junu, this is a great idea as I'm just launching and would love to know more about how users will use my landing page. I just signed up, but now I'm stuck on a blank screen. Any suggestions?

Convert or Not: Simulate first-time users. See why they drop off
Jordan Carroll left a comment
Hey David, I've been looking for something along these lines because I'm not finding the Datadog alternative very useful. Maybe a bit nosey, but out of interest, how does this work without cookies?

Clickport: The modern, powerful Google Analytics alternative
Jordan Carroll left a comment
Hey! This is pretty cool! Can I ask how it differs from the likes of ngrok?

Smuggl: Share your localhost as an invite-only link
Jordan Carroll left a comment
Hey! This is super interesting to me as someone who has brought a lot of AI products into my businesses, but I have one question. I spend a lot of time refining guardrails for a project: goals, acceptance criteria, and descriptions of the workflow, such as following TDD. Is there a way to set this up so that some of it is reusable across projects and codebases, like we have with AGENTS.md today?

Doing: Voice and visual context for AI builders. No subscription.
Jordan Carroll left a comment
Hey everyone 👋 I’m Jordan, founder of RaptorCI. I built this after repeatedly seeing the same issue while working on production systems — changes would pass code review and CI, but still cause problems in production. Reviews focus on correctness, CI gives pass/fail, but neither answers “what could this actually break?” RaptorCI is my attempt to solve that. It analyses pull requests and...

RaptorCI: Catch risky code changes and weak tests before they ship
RaptorCI focuses on risk, not output. While most tools generate comments, rules, or pass/fail checks, they don’t show what could actually break. RaptorCI analyses pull requests to identify high-impact changes, explains their potential impact, and gives a clear signal of how safe a change is to ship.
Built after seeing risky changes repeatedly slip through review in production systems, it’s already being used by teams reviewing real pull requests and iterating quickly based on feedback.

RaptorCI: Catch risky code changes and weak tests before they ship

