Hey everyone
Thank you again for the support on Alpie Core; the feedback from this community meant a lot to us.
Since then, we have finally released Alpie, our most advanced product yet: a full AI workspace where you can see Alpie Core working in real workflows, not just isolated prompts. You can use the model with files and PDFs, run research, collaborate with others in shared chats, and keep long-running context organised.
If you've been curious how Alpie Core performs beyond single queries, this is where you can try it hands-on.
Alpie Core
Hey builders
Modern AI keeps getting better, but only if you can afford massive GPUs and memory. We didn’t think that was sustainable or accessible for most builders, so we took a different path.
Alpie Core is a 32B reasoning model trained, fine-tuned, and served entirely at 4-bit precision. It delivers strong multi-step reasoning, coding, and analytical performance while dramatically reducing memory footprint and inference cost, without relying on brute-force scaling.
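To see why serving at 4-bit shrinks the footprint so much, the weight memory alone works out roughly as follows. This is a back-of-envelope sketch that counts only the weights and ignores KV cache, activations, and runtime overhead:

```python
def weight_gb(params: float, bits: int) -> float:
    """Approximate memory for model weights alone, in decimal GB."""
    return params * bits / 8 / 1e9

# A 32B-parameter model at different precisions (weights only):
print(weight_gb(32e9, 16))  # FP16 baseline: 64.0 GB
print(weight_gb(32e9, 4))   # 4-bit:         16.0 GB
```

At 4-bit the weights fit in roughly a quarter of the FP16 memory, which is what makes lower-end GPUs practical; real deployments need additional headroom for the KV cache and runtime.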
It supports a 65K-token context window, is open source (Apache 2.0), OpenAI-compatible, and runs efficiently on practical, lower-end GPUs. You can use it today via Hugging Face, Ollama, our hosted API, or the 169Pi Playground.
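Because the API is OpenAI-compatible, any OpenAI client should work once pointed at the hosted endpoint. Here is a minimal standard-library sketch of the request shape; the base URL and model id below are placeholders, not the real values, so check the 169Pi docs before using them:

```python
import json
import urllib.request

# Placeholder values -- substitute the real endpoint, model id, and key
# from the 169Pi docs; these are assumptions, not confirmed names.
BASE_URL = "https://api.169pi.example/v1"
API_KEY = "YOUR_API_KEY"

def build_request(base_url: str, api_key: str, body: dict) -> urllib.request.Request:
    """Build the POST an OpenAI-compatible chat-completions server expects."""
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

payload = {
    "model": "alpie-core-32b",  # assumed model id
    "messages": [
        {"role": "user", "content": "Walk me through a multi-step plan."}
    ],
    "max_tokens": 512,
}

req = build_request(BASE_URL, API_KEY, payload)
# urllib.request.urlopen(req) would send it; omitted here so the sketch
# runs without network access or a real key.
print(req.full_url)
```

The same payload works through the official OpenAI SDKs by setting their `base_url` and `api_key` options instead of building the request by hand.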
To keep you building over Christmas and the New Year, we’re offering 5 million free tokens on your first API usage, so you can test, benchmark, and ship without friction.
This launch brings the model, benchmarks, API access, and infrastructure together in one place, and we'd love feedback from builders, researchers, and infra teams. Questions, critiques, and comparisons are all welcome as we shape v2.