Launching today

Coderrr
Open source CLI-first AI coding companion
A powerful CLI tool that writes, debugs, and ships code alongside you. Like Claude Code, but free and open source. AI-powered coding agent that analyzes tasks, creates actionable plans, and executes commands.





Coderrr
Hey Product Hunt 👋
I’m Akash, the maker of Coderrr.
Coderrr started as a side project because I loved tools like Claude Code and OpenAI Codex, but wanted a free, open-source, CLI-first alternative that developers could actually inspect, modify, and extend.
Coderrr runs directly in your terminal and acts like an AI coding partner — it can understand your codebase, help generate or refactor code, explain bugs, and automate common dev tasks without needing a heavy IDE setup.
What surprised me is that after submitting this to an open-source contribution event, it started getting real traction from the community — which pushed me to polish it and launch here.
This is still early, and I’d genuinely love your feedback:
What workflows should Coderrr support next?
What annoys you most about existing AI coding tools?
What would make this part of your daily dev setup?
If you like open-source, CLI tools, or AI for developers, I’d love for you to try it out.
Thanks for checking it out 🙌
— Akash
Huge congrats on the launch! 🎉 Coderrr looks like a super practical CLI‑first companion for real-world shipping, not just toy snippets—excited to see how it fits into devs’ daily workflows.
Coderrr
@zeiki_yu Thanks a lot! I really appreciate it!
Kindly share this with any tech enthusiasts in your circle so they can try it out and give feedback 😊
Big fan of the CLI-first approach for those of us who live outside VS Code. Since it's open source, does this support plugging in local models (like via Ollama) to keep the whole workflow offline and free, or is it tied to specific APIs?
Coderrr
@samet_sezer
Apart from the default API, Coderrr supports several other providers (OpenAI, Anthropic, Ollama, OpenRouter) that users can configure based on their own needs. You can have a local model serving on your machine and use the Ollama provider to keep everything offline and contained on your system.
Check https://coderrr.aksn.lol/docs/providers for more details about providers
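For a rough idea of what the fully local setup looks like on the Ollama side, here's a minimal sketch (illustrative only, not Coderrr's own configuration; the model name and client code are assumptions, so see the providers docs above for the real config):

```python
# Illustrative sketch: talking to a locally served Ollama model through its
# OpenAI-compatible endpoint. Nothing here is Coderrr-specific.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's default local endpoint
    api_key="ollama",                      # placeholder; no cloud key is needed
)

response = client.chat.completions.create(
    model="llama3",  # assumed: any model you've already pulled with Ollama
    messages=[{"role": "user", "content": "Explain this stack trace."}],
)
print(response.choices[0].message.content)
```

Everything stays on localhost, so the workflow is offline end to end once the model is downloaded.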
@samet_sezer Nice — local/Ollama support is a strong differentiator here. The real test, though, will be how well Coderrr handles context drift in long-lived CLI sessions (monorepos, multi-branch work, partial diffs).
Do you persist session memory across terminals, or is everything purely per-run?
Coderrr
@samet_sezer @gnizdoapp Right now it doesn't handle session memory across terminals, but implementation of this feature is ongoing and it should hopefully be ready to ship within the next week.
How is this different from opencode, for instance?
@janschutte Good question. From what I see, the key difference isn’t “better AI,” but CLI-first + fully inspectable stack. opencode leans more toward hosted/GUI workflows, while Coderrr feels built for engineers who want scripts, reproducibility, and local control.
Akash — what’s the single architectural choice that makes Coderrr fundamentally different under the hood?
The CLI-first vibe is great for speed. I'm curious—how does Coderrr handle context retrieval for large codebases compared to heavier IDE plugins?
Congratulations on the launch! I guess the only question I have is how it compares to OpenCode, which is the one I was initially thinking of using for local models now that my new MacBook is on its way. I'll test both once it arrives anyway, but I want to know whether there are already different workflows or features between the two.