PickLLM


Compare GPT models side by side with your own prompt and key


PickLLM lets you compare OpenAI model outputs side by side using your own prompt. Supports GPT-4.1, GPT-4o, o4-mini, o1, and more. Uses your own API key via a minimal proxy; the key is never stored. Customize models and settings. Built for quick, no-setup testing.
Free

Denis Leonov
👋 Hi Product Hunt! I built PickLLM as a lightweight tool to compare OpenAI model outputs side by side using the same prompt. If you’ve ever wondered how gpt-4.1, gpt-4o, o1, o3-mini, gpt-4.1-mini, or o4-mini differ, this tool gives you a visual way to test and see for yourself.

🧠 Why I built it
Choosing the "right" model from OpenAI’s docs alone often isn't enough. I wanted a way to run one prompt through multiple models at once and quickly compare the responses in terms of quality, latency, and behavior. So I built this tool for myself, and figured others might find it useful too.

⚙️ How it works
1. Add your API key
2. Write your prompt
3. Select any set of models (the default list includes 9, but you can edit it freely)
4. Run the comparison and see all outputs side by side
5. (Optional) Adjust additional settings such as temperature, top-p, and max tokens

You can choose from: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-4o, gpt-4o-mini, o1, o1-mini, o3-mini, o4-mini. More models can be added as OpenAI expands availability. (There's a rough sketch of the fan-out idea at the end of this comment.)

🔐 API Key Handling
You'll need to provide your own OpenAI API key. The app uses a minimal Next.js proxy to route requests, but the key is never stored or logged. The code is open if you want to verify that. (See the proxy sketch below.)

💸 Token Usage
Running 6–9 models in parallel adds up. In my own testing, a single full run cost ~$0.08 worth of tokens. That's why I couldn't offer even a free test run.

🛠️ Built With
Built in ~4 hours with bolt.new: TypeScript, Next.js, Tailwind. Minimalist setup, no backend logic beyond routing.

👉 https://pickllm.com

Would love your thoughts and feedback!
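
P.S. For the curious, here is roughly what the side-by-side comparison boils down to. This is a minimal sketch assuming the official `openai` Node SDK; the `compareModels` name, default model list, and error handling are illustrative, not PickLLM's actual code:

```typescript
import OpenAI from "openai";

// Illustrative defaults only; the real tool lets you edit the model list freely.
const DEFAULT_MODELS = ["gpt-4.1", "gpt-4.1-mini", "gpt-4o", "gpt-4o-mini", "o4-mini"];

interface ModelResult {
  model: string;
  output: string;
  latencyMs: number;
}

// Fan the same prompt out to several models in parallel and collect
// each model's output plus the wall-clock latency of its request.
export async function compareModels(
  apiKey: string,
  prompt: string,
  models: string[] = DEFAULT_MODELS,
): Promise<ModelResult[]> {
  const client = new OpenAI({ apiKey });

  // Run all requests concurrently; one failing model shouldn't sink the rest.
  const settled = await Promise.allSettled(
    models.map(async (model) => {
      const start = Date.now();
      const res = await client.chat.completions.create({
        model,
        messages: [{ role: "user", content: prompt }],
      });
      return {
        model,
        output: res.choices[0]?.message?.content ?? "",
        latencyMs: Date.now() - start,
      };
    }),
  );

  return settled.map((r, i) =>
    r.status === "fulfilled"
      ? r.value
      : { model: models[i], output: `Error: ${String(r.reason)}`, latencyMs: 0 },
  );
}
```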
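
And here is the key-handling idea as a sketch: a Next.js route handler that only forwards the key it receives with each request and never writes it anywhere. The route path (`app/api/compare/route.ts`) and the `x-openai-key` header name are assumptions for illustration; check the open code for the real details:

```typescript
// app/api/compare/route.ts — illustrative pass-through proxy.
// The user's key arrives with each request and is used only for the
// outgoing call; nothing is persisted or logged.
import { NextRequest, NextResponse } from "next/server";

export async function POST(req: NextRequest) {
  const apiKey = req.headers.get("x-openai-key");
  if (!apiKey) {
    return NextResponse.json({ error: "Missing API key" }, { status: 400 });
  }

  // Request body is the payload to forward, e.g. { model, messages, temperature, ... }.
  const body = await req.json();

  // Relay the request to OpenAI and return its response unchanged.
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });

  return NextResponse.json(await upstream.json(), { status: upstream.status });
}
```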