



Just launched PromptPerf 2.0 on Product Hunt — test your prompts on 100+ AI models free
Hey everyone! 7 months ago I launched PromptPerf on Product Hunt and got feedback that changed everything:

- "Sadly no Gemini" → Now there are 100+ models
- "API key friction" → Now works without signup
- "Need multi-model comparison" → Now compare 5 models side-by-side

Today we're back with 2.0, and I built exactly what the community asked for.

What it does: You enter your actual prompt + what a...

A Big Thank You, and a Big Ask
Thank you, everyone, for the support. I have received nearly 40 signups and 1 paid user, which is massive for me as I am still in the early stages of validating the product.

Next steps: Even though the signups are coming in, I am tracking usage of the app, and I don't see many users running evaluations. I need help: how should I get you to try and test the product....
Building with AI? Then you know this pain.
Prompt A + Model B = Output C... until it suddenly doesn't. Same input. Same config. Totally different output. LLMs aren't deterministic, and when your product depends on stable responses, it feels like building on sand.

I got tired of playing prompt roulette. Tired of:

- Copy-pasting prompts into playgrounds
- Switching models manually
- Changing temperature settings
- Scoring responses by eye...
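The "same input, different output" problem above can be made measurable instead of eyeballed. Here's a minimal sketch of one way to score output stability across repeated runs; `call_model` is a hypothetical stub standing in for a real API call, not PromptPerf's actual implementation:

```python
from collections import Counter

def stability_score(outputs: list[str]) -> float:
    """Fraction of runs that produced the single most common output.
    1.0 means fully deterministic for this prompt/config; lower means drift."""
    if not outputs:
        return 0.0
    most_common_count = Counter(outputs).most_common(1)[0][1]
    return most_common_count / len(outputs)

# Hypothetical stand-in for a real model call (no API key needed here):
# pretend higher temperature yields more distinct answer variants.
def call_model(prompt: str, model: str, temperature: float, seed: int) -> str:
    variants = max(1, int(temperature * 10))
    return f"answer-{seed % variants}"

runs = [call_model("Summarise X", "model-b", 0.7, seed=i) for i in range(10)]
print(round(stability_score(runs), 2))
```

Swapping the stub for a real client call turns this into a quick regression check: run the prompt N times per config and alert when the score drops.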







Still manually tweaking AI prompts & praying for the best?
I've completed my proof-of-concept for the AI evaluation tool, successfully testing prompts across configs (Models x Temps x Runs). https://promptperf.dev/

Hi, I am building an AI model prompt-optimisation tool that runs a prompt through multiple AI models at various temperature settings for X number of runs. This saves time on manual testing; for example, a user wants to test 3 questions/prompts for...
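The Models x Temps x Runs grid described above multiplies quickly, which is exactly the manual work the tool replaces. A minimal sketch of enumerating that grid (model names and counts here are made up for illustration, not the tool's real config):

```python
from itertools import product

prompts = ["q1", "q2", "q3"]            # 3 questions/prompts
models = ["model-a", "model-b", "model-c"]  # hypothetical model names
temperatures = [0.0, 0.5, 1.0]
runs_per_config = 3

# One job dict per (prompt, model, temperature, run) combination.
jobs = [
    {"prompt": p, "model": m, "temperature": t, "run": r}
    for p, m, t, r in product(prompts, models, temperatures, range(runs_per_config))
]
print(len(jobs))  # 3 * 3 * 3 * 3 = 81 individual calls if done by hand
```

Even this small example is 81 playground round-trips done manually, which is the time sink the tool automates.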


