Launching today

TokenCount Context Bundler
Save 90% AI tokens via Semantic Dehydration & .cursorrules
Stop paying for wasted tokens. ContextBundler "dehydrates" your entire repo into logic-aware AI context. It prunes JSDoc, logs, and boilerplate while keeping the logic 100% readable for Cursor and Claude. Features built-in .cursorrules generation.


Hey Product Hunt! 👋 I’m Justin, the maker behind JustinXai Labs.
A few weeks ago, my Cursor/Claude bill hit triple digits. I realized 80% of what I was feeding the AI was just "token garbage"—massive JSDocs, redundant logs, and empty lines that the AI didn't actually need in order to "see" the logic.
So I built ContextBundler (and the TokenCount matrix).
Unlike simple file-mergers, it uses a Semantic Skimming algorithm. It prunes the implementation fluff but keeps the "logic map" intact, slashing token usage by up to 90% without breaking the AI's understanding.
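To give a feel for what "dehydration" means in practice, here's a deliberately simplified sketch (not the actual Semantic Skimming algorithm, just a toy illustration): strip JSDoc blocks, debug logs, and blank lines while leaving the code's logic untouched.

```typescript
// Toy illustration of "dehydration" — NOT the real Semantic Skimming
// algorithm, just the kind of pruning it performs.
function dehydrate(source: string): string {
  return source
    .replace(/\/\*\*[\s\S]*?\*\//g, "") // strip JSDoc blocks
    .split("\n")
    .filter((line) => !/^\s*console\.log\(/.test(line)) // drop debug logs
    .filter((line) => line.trim() !== "") // drop empty lines
    .join("\n");
}

const input = `/** Adds two numbers. */
function add(a: number, b: number): number {
  console.log("adding");

  return a + b;
}`;

console.log(dehydrate(input));
// The JSDoc, the log call, and the blank line are gone; the
// function signature and return statement survive intact.
```

The real tool is far more logic-aware than these three regex passes, but the principle is the same: the AI doesn't need the fluff to understand the code.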
What’s in the Matrix?
✅ CLI: npx @xdongzi/ai-context-bundler@latest .
✅ VSCode: Lives in your sidebar for instant skimming.
✅ Chrome: Grab clean Markdown from heavy docs (like react.dev).
🎁 LAUNCH GIFT: I’ve unlocked all Pro features at 50% off today to celebrate our launch!
I'd love to get your feedback: What’s the messiest repo you’ve tried to feed into an LLM? Let me know in the comments! 🛡️
Stop the subscription fatigue. 🛑 We are 100% local, no server costs, so we only charge a ONE-TIME $5 fee for the Pro Pass. Use code PH50SKELETON for 50% off during launch. Buy once, save tokens forever. 🛡️