Monostate AItraining

Fine-tuning, RL, and inference in one CLI

Fine-tune LLMs and ML models with automatic dataset conversion, hyperparameter sweeps, and custom RL environments - monostate/aitraining
Monostate AItraining gallery image
Free
Launch tags: Artificial Intelligence • GitHub • Tech
Launch Team

Andrew Correa
Hey everyone! I'm Andrew, the dev of AITraining. I built this because I kept losing time to trainer boilerplate instead of actually iterating on models. The other frustration was hardware: code that worked on NVIDIA would break on my Mac's MPS backend, and tools like Hugging Face's AutoTrain didn't handle those edge cases well.

AITraining wraps all of that into a CLI wizard that walks you through model selection, dataset conversion (it auto-detects six formats), and training configuration. It supports SFT, DPO, ORPO, PPO, reward modeling, and knowledge distillation. After training, aitraining chat lets you test and compare iterations locally. It works on consumer hardware, auto-detecting Apple Silicon vs. CUDA and optimizing accordingly.

It's built on the Hugging Face ecosystem and open source (Apache 2.0). Docs are available in English, Spanish, Chinese, and Portuguese. Would love to hear what training workflows or features you'd find useful. PRs welcome!