QTune

An open-source web application for fine-tuning language models

QTune is a comprehensive web application for fine-tuning language models on consumer GPUs with as little as 8 GB of VRAM. Contact the developer on Telegram: https://t.me/xkcd0000 · Telegram channel: https://t.me/curseknowledge
[QTune gallery images]
Free

Ruslan Koroy
Maker
📌
Hey everyone! I'm Ruslan, the creator of QTune, a web app built with Gradio to simplify fine-tuning language models right on your local machine. It supports QLoRA-based workflows on GPUs with as little as 8 GB of VRAM, integrates with Hugging Face and OpenRouter (for dataset generation), and even offers model conversion to GGUF format or direct deployment via Ollama.

QTune provides an intuitive, end-to-end pipeline: choose your model, prepare or upload a dataset, configure training parameters, and monitor live training logs, all from your browser. I built this because I wanted something accessible, transparent, and efficient for developers, educators, and hobbyists alike.

I'd love to hear your feedback, ideas, or feature requests. Happy to help if you run into any issues setting it up, just drop a note here! Thanks for checking out QTune, and I hope it makes LLM fine-tuning more approachable for everyone.
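For anyone curious why QLoRA-style training fits in 8 GB of VRAM: the base model's weights stay frozen (and quantized), and only small low-rank adapter matrices are trained. Here's a minimal NumPy sketch of the LoRA update rule that scheme is built on; all names and shapes are illustrative, not QTune's actual code:

```python
import numpy as np

# LoRA freezes the base weight W and learns a low-rank update:
#   W_eff = W + (alpha / r) * B @ A
# Only A and B (r * (d_in + d_out) parameters) are trained,
# instead of the full d_out * d_in weight matrix.

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16   # hypothetical layer size and rank

W = rng.standard_normal((d_out, d_in))       # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection, zero-init

def lora_forward(x):
    # Base path plus the scaled low-rank adapter path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)
```

Because B starts at zero, the adapter path contributes nothing at step 0, so `lora_forward(x)` initially equals the frozen model's output `W @ x`; training then only ever touches A and B. QLoRA adds 4-bit quantization of W on top of this, which is where the remaining memory savings come from.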
Azra Malek

Wow, this is just what I needed! I’ve been a bit hesitant to dive into finetuning, but this looks totally manageable! 👏

Rabin Quest

This is super exciting! Finally, a way to train LLMs without having to sell a kidney for an A100! 😂

Linda Wick

As someone new to ML, I’ve always felt lost in the command line chaos… having a browser-based UI is such a breath of fresh air.

Ashley Wood

I just tried setting it up, and it actually runs on my 3060 (8GB). Mind = blown!

Vaneza Zan

I love that it integrates QLoRA. Low VRAM finetuning is 🔑 for hobby projects.

James Lee

Quick question — does it support multi-GPU setups, or is it just for single consumer cards?
