
Waydev
Tokenmaxxing? AI adoption? AI Impact? RL?
1.7K followers
Waydev is the measurement layer for AI-written code. We track AI adoption, AI impact, and AI ROI across the full SDLC — from the first token consumed to the line shipped in production. Nine years building engineering intelligence. YC W21. Fortune 500.
This is the 11th launch from Waydev.

Waydev Agent
Launching today
Waydev measures the AI Adoption, AI Impact, and AI ROI of your engineering teams, copilots, and autonomous agents — across every tool from Cursor to Claude Code to Devin. Ask Waydev, our agent, turns engineering data into plain-English answers. Skills let you configure it with SKILL.md files. MCP exposes your engineering feed to any external agent. Built for engineering leaders who'd rather have conversations than dashboards.
Free Options
Launch tags: Pitch NYC
Waydev
Hey PH 👋
Alex here, founder of Waydev. Thanks @rajiv_ayyangar for the hunt — and shoutout to the team at Deel for including us in Pitch by Deel NYC. Excited to be part of it.
Nine years ago I started Waydev because engineering leaders were flying blind on what their teams actually shipped. Now half the code is written by Cursor, Claude Code, Copilot, and Devin — and the question from CFOs is the same one in different clothes: is the seven-figure AI bill actually paying off?
Most teams don't know. They're guessing.
Waydev is the measurement layer for AI-written code. We track three things across the full SDLC:
AI Adoption — who's using which tool, how often, how deeply
AI Impact — does the AI-written code ship, get reverted, or rot in PR
AI ROI — dollars in vs. throughput out, across humans, copilots, and autonomous agents
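As a back-of-the-envelope illustration of that dollars-in vs. throughput-out framing (this is not Waydev's actual model; the field names and the per-PR dollar value are made-up assumptions), the arithmetic looks roughly like:

```python
from dataclasses import dataclass

@dataclass
class EngineerMonth:
    """Hypothetical per-engineer monthly rollup; field names are illustrative."""
    ai_spend_usd: float   # license share + token spend for this engineer
    prs_merged: int       # throughput that actually shipped
    prs_reverted: int     # rework signal

def directional_roi(months: list[EngineerMonth], usd_per_merged_pr: float) -> float:
    """Dollars of shipped throughput per dollar of AI spend (directional, not causal)."""
    spend = sum(m.ai_spend_usd for m in months)
    value = sum((m.prs_merged - m.prs_reverted) * usd_per_merged_pr for m in months)
    return value / spend if spend else 0.0

team = [
    EngineerMonth(ai_spend_usd=180.0, prs_merged=14, prs_reverted=1),
    EngineerMonth(ai_spend_usd=95.0, prs_merged=9, prs_reverted=0),
]
print(directional_roi(team, usd_per_merged_pr=50.0))  # → 4.0
```

Netting reverts out of throughput is the point: seats used is an input, shipped-and-kept code is the output.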
Today we're shipping three things that change how engineering leaders interact with their data:
🤖 Ask Waydev — our agent. Ask in plain English, get a real answer. No more dashboards no one opens.
📄 Skills — configure Ask Waydev with SKILL.md files.
🔌 Waydev MCP — your engineering feed, exposed to any external agent. Plug it into Claude, Cursor, your internal tools — whatever's already in your stack.
The framing we keep coming back to: MCP is the data out. Skills are the instructions in. Ask Waydev is the conversation in the middle.
The bet: engineering leaders would rather have conversations than dashboards.
We've been covered by TechCrunch, TNW, and DevOps.com along the way.
I'll be in the comments all day. Roast the product, ask the hard questions. That's how this gets better.
— Alex
Questo
@rajiv_ayyangar @alex_circei good luck💪
Waydev
@rajiv_ayyangar @sebastian_maraloiu Thanks!
Archbee
@rajiv_ayyangar @alex_circei team waydev is on a roll - good luck!
Waydev
@curiouskitty Good question — genuinely the hardest part of this category, and most answers in the market hand-wave through it.
Cost side: license seats + per-engineer token consumption across Copilot, Cursor, Claude Code, Windsurf, Devin. Not seat count × adoption rate — actual usage and spend, per person.
Outcomes side: throughput, cycle time, change failure rate, rework. Not acceptance rate. Acceptance rate is a vanity metric — it doesn't tell you whether the code shipped, was reverted, or caused an incident two sprints later.
Confounders: we don't claim RCT-grade causality. Anyone who does is selling. What we actually do is per-engineer longitudinal baselines (same person pre-AI vs post-AI), cohort matching on tenure and repo, and project-type tags. That gets you directional signal you can act on, not a regression coefficient you can publish.
Vs Jellyfish and Faros — Jellyfish's DNA is capacity allocation, Faros is a flexible data platform you query. Waydev is opinionated about the question: Adoption → Impact → ROI as three connected pillars, with the agent surfacing answers instead of you building dashboards to find them.
Happy to go deeper on any of this.
Product Hunt
@alex_circei Very interesting!
How do you actually trace a Copilot / Cursor / Claude Code / Windsurf / Devin suggestion through to those outcomes? The attribution chain could be brittle — devs accept-then-edit, the tools don't uniformly tag commits, and the path from suggestion → merged PR → production incident two sprints out has a lot of breaks. Is it IDE-side instrumentation, inference from commit patterns + Git history, something the agent does post-hoc? And at what granularity are you confident the attribution holds up — PR, commit, hunk?
Related: the longitudinal baseline you mentioned — how long a pre-AI window do you need before the comparison is meaningful, and what's the move for engineers hired post-AI where there's no "before"?
The "AI Impact" piece is the hardest problem in this space and most tools quietly fudge it. Devs don't write code in clean buckets. They use Cursor to draft, rewrite half by hand, accept Claude's completion on three lines out of fifty, paste a ChatGPT snippet and edit it for an hour. What's Waydev's actual attribution method when AI authorship is partial, mixed, and deniable? IDE-plugin self-reporting, statistical fingerprinting on commits, or something else?
Waydev
@vincentf You're naming the exact problem we obsess over. Honest answer: there's no single source of truth, and anyone selling you one is fudging it.
Our stack is layered:
- Direct integrations with the AI tools themselves — Claude Code, Cursor, GitHub Copilot, Windsurf, Devin — pulling adoption, usage, and token data straight from each vendor's API.
- Entire integration (open source) on top, capturing the actual AI agent session content tied to commits. We surface this as AI Checkpoints inside AI Impact.
- Our own commit hook (`wd_commit_hook`) bridges Entire's session data into Waydev's commit tracking, so a session is linked to its resulting commit deterministically, not inferred.
- Code-to-Production then maps that AI-touched commit through to a deploy.
- Token Usage and Vendor ROI handle the cost side.
The honest gap: "ChatGPT in a browser, paste, edit for an hour" is the case where there's no telemetry to capture. We don't pretend to detect that. It either gets self-attributed or it doesn't get tracked. We'd rather report "unknown" than fabricate a confidence score.
Docs: https://docs.waydev.co/v5.0/docs/start-guide
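For a rough picture of how a session-to-commit bridge like `wd_commit_hook` could work (a hypothetical sketch, not Waydev's actual hook; the trailer name and session id are invented), a prepare-commit-msg style hook might append a git trailer so the link is recorded in the commit itself rather than inferred later:

```python
import sys

def append_session_trailer(commit_msg: str, session_id: str) -> str:
    """Append a git trailer linking an AI agent session to the commit.
    Trailer name and session-id format are illustrative, not a real schema."""
    trailer = f"AI-Session: {session_id}"
    if trailer in commit_msg:
        return commit_msg  # idempotent: don't double-tag on amend
    return commit_msg.rstrip("\n") + "\n\n" + trailer + "\n"

if __name__ == "__main__" and len(sys.argv) > 1:
    # git invokes prepare-commit-msg hooks with the message file as argv[1]
    msg_path = sys.argv[1]
    with open(msg_path) as f:
        msg = f.read()
    with open(msg_path, "w") as f:
        f.write(append_session_trailer(msg, session_id="sess-example-123"))
```

Stamping a trailer at commit time is what makes the commit-to-session link deterministic; anything reconstructed afterward from diffs or timestamps is inference.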
Waydev
@grey_seymour Fair criticism. The honest reason pricing isn't public: scope varies a lot (integrations, AI tools tracked, scale) and any single number we put on the page would either underprice the high end or scare off the low end. But that's our problem to solve, not a reason to make you eat seven days of uncertainty.
DM me your team size, the AI tools you're using, and rough scale. I'll send a number same day, before any trial work. If it doesn't fit, you've lost two minutes. No call, no lock-in surprises, no per-seat trapdoors. We'll give you a special PH discount!
the 'ai adoption vs ai impact vs ai roi' split is the right decomposition. most teams collapse all three into 'are devs using cursor', which tells you nothing about whether the code actually shipped or got reverted. measuring the impact layer is where this gets interesting
Waydev
This is super relevant right now. Everyone is pouring money into AI, but barely anyone knows what it’s actually returning. Love the shift from dashboards to conversations, feels way more natural for how teams want to work.