Launching today
Monostate
Vibe Training AI models
16 followers
Monostate is an all-in-one AI training platform. Fine-tune LLMs with your own data using SFT, DPO, or RLHF — no training scripts required. Compare commercial and open-source models side by side with built-in benchmarking. Deploy to GPUs (A100s to H100s) with one click and autoscaling. Supports LoRA, QLoRA, and full parameter training across dozens of architectures. Works with Llama, Mistral, Phi, Qwen, and more. From data to production in minutes, not weeks.
Hey everyone! I'm Andrew, the builder behind Monostate. Some of you might know me from AITraining (open-source CLI trainer) — Monostate is the next step. I built AITraining because I was tired of trainer boilerplate. But once I had models, I realized the rest of the workflow was just as fragmented — benchmarking meant a different tool, deployment meant SSH-ing into GPU boxes, and comparing models meant juggling notebooks. Every ML team I talked to had the same problem: 5+ disconnected tools duct-taped together.
Monostate puts it all in one place:
- No-code fine-tuning — SFT, DPO, RLHF, reward modeling. Configure through UI, no training scripts.
- Multi-model benchmarking — compare accuracy, latency, and cost across commercial + open-source models side by side.
- One-click GPU deployment — A100s to H100s with autoscaling. No SLURM, no cloud provider negotiations.
- Visual pipeline builder — chain specialized models together with drag-and-drop (coming soon).
It supports LoRA, QLoRA, and full parameter training on Llama, Mistral, Phi, Qwen, and more. Free tier available to get started.
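For anyone unfamiliar with the adapter methods mentioned above: LoRA freezes the pretrained weights and trains only a small low-rank update, which is why it fits on modest GPUs. Here's a minimal numpy sketch of the idea (an illustration of the technique, not Monostate's code — the dimensions and scaling are hypothetical):

```python
import numpy as np

d, k, r = 1024, 1024, 8           # layer dims and LoRA rank
alpha = 16                        # LoRA scaling factor

W = np.random.randn(d, k)         # frozen pretrained weight (not trained)
A = np.random.randn(r, k) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))              # zero-initialized: adapter starts as a no-op

def lora_forward(x):
    # base output plus the scaled low-rank update (alpha/r) * B @ A
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

full_params = d * k               # what full fine-tuning would train
lora_params = r * k + d * r       # what LoRA actually trains
print(f"trainable params: {lora_params} vs {full_params} "
      f"({100 * lora_params / full_params:.2f}%)")
```

With rank 8 on a 1024×1024 layer, you train about 1.5% of the parameters; QLoRA takes the same idea further by keeping the frozen base weights in 4-bit precision.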
Would love feedback — what's the most painful part of your current ML workflow? We're actively building based on what users tell us.
Please expect some rough edges. This is an open beta and things will break. That's why my WhatsApp and email are right there in the app — if something goes wrong, reach out and I'll help you personally.