Stop copy-pasting prompts across ChatGPT, Claude, and Gemini. PromptDiff compares LLM outputs in one API call.
Send a prompt + pick models. Get back: output, latency (ms), tokens, and cost (USD) per model.
8 models, 4 providers:
- Claude Sonnet & Haiku
- GPT-4o & 4o-mini
- Gemini Pro & Flash
- Grok 3 & 3 Mini
No SDK needed. Works with curl, Python, TypeScript.
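To make the flow concrete, here is a minimal Python sketch of what a call might look like. The endpoint URL, field names, and response keys below are assumptions for illustration only; the listing doesn't show the actual API shape, so check the real docs before use.

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape -- the real API may differ.
API_URL = "https://api.promptdiff.example/v1/compare"  # assumed URL

# One prompt, several models to compare in a single call.
payload = {
    "prompt": "Summarize this changelog in one sentence.",
    "models": ["claude-3-5-haiku", "gpt-4o-mini", "gemini-1.5-flash"],
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder key
    },
)

# Uncomment with a real key and endpoint:
# with urllib.request.urlopen(req) as resp:
#     results = json.load(resp)
# Per the listing, each result carries output, latency (ms),
# tokens, and cost (USD) -- assumed key names shown below:
# for r in results["results"]:
#     print(r["model"], r["latency_ms"], r["tokens"], r["cost_usd"])

print(json.dumps(payload, indent=2))
```

Because it's plain HTTP with a JSON body, the same request translates directly to curl or TypeScript's `fetch`.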
Free: 100 evals/month, no card. Built by a solo dev from Tokyo.

PromptDiff: Compare LLM outputs across models. One API call.
Maiki Takano left a comment
Hey Product Hunt! I'm Maiki, a solo developer based in Japan. I built PromptDiff in 2 days as a side project, and I wanted to share the story behind it.

The problem I kept hitting

Every time I built something with LLMs, I had the same frustrating workflow: open ChatGPT, paste prompt, copy output. Switch tab, open Claude, paste again. Switch tab again for Gemini. Then try to compare three walls...

