FLAP
Fine-tune any LLM (100B+) on your GPU with zero cloud costs

Fine-tune any LLM on your local GPU, no cloud required. FLAP uses memory-mapped sharding to train models from 1B to 670B+ parameters on as little as 6 GB of VRAM. Your data never leaves your machine. No per-hour GPU bills. No vendor lock-in.

✓ 21.5× faster than traditional fine-tuning
✓ ~95% cost reduction vs cloud APIs
✓ Supports Llama, Mistral, Qwen, and more
✓ Free tier, no credit card required
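
The core idea behind memory-mapped sharding is to keep the model's weights on disk and page in one shard at a time, so peak VRAM stays near a single layer's footprint instead of the full model. Below is a minimal Python sketch of that general technique, not FLAP's actual code; the shard directory, file layout, and layer count are assumptions made for illustration.

```python
# Minimal sketch of memory-mapped sharding (illustrative; not FLAP's code).
# Assumption: each transformer layer's fp16 weights are stored as one file,
# e.g. checkpoints/llama-70b/layer_000.bin ... layer_079.bin.
import numpy as np
import torch

SHARD_DIR = "checkpoints/llama-70b"   # hypothetical on-disk layout
NUM_LAYERS = 80                       # hypothetical layer count
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

def load_layer_shard(layer_idx: int) -> torch.Tensor:
    # np.memmap maps the file into virtual memory without reading it all in;
    # the OS pages bytes from disk only as they are actually touched.
    shard = np.memmap(
        f"{SHARD_DIR}/layer_{layer_idx:03d}.bin",
        dtype=np.float16,
        mode="r",
    )
    # Copy only this layer's weights onto the accelerator. Peak VRAM is then
    # roughly one layer's weights plus activations, not the whole model.
    return torch.from_numpy(np.array(shard)).to(DEVICE)

for layer_idx in range(NUM_LAYERS):
    weights = load_layer_shard(layer_idx)
    # ... run this layer's forward/backward pass with `weights` here ...
    del weights                       # release the layer before loading the next
    if DEVICE == "cuda":
        torch.cuda.empty_cache()
```

Because only one shard is resident at a time, the model size you can train is bounded by disk capacity rather than VRAM, which is how a multi-hundred-billion-parameter checkpoint can be touched from a 6 GB card, traded off against disk bandwidth on every pass.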