How would you actually use Alpie day to day?
We are building Alpie, but usage matters more than features. Imagine you open Alpie every day: what are you coming here to do?
• Ask questions and get fast answers
• Collaborate with others
• Track progress via a dashboard
• Follow discussions/topics
• Something else?
Be specific if you can: “I’d use it to ___, ___ times a day.” This will directly shape what we build next 👇
Try Alpie Core in a full workspace with files, research & collaboration
Hey everyone 👋 Thank you again for the support on Alpie Core; the feedback from this community meant a lot to us. Since then, we have finally released Alpie, our most advanced product yet: a full AI workspace where you can now see Alpie Core working in real workflows, not just isolated prompts. You can use the model with files and PDFs, run research, collaborate with others in shared...


Python SDK + CLI for Alpie Core are live (sync, async, streaming)
Hey Builders 👋 We have just released the official Python SDK and CLI for Alpie Core, our 32B reasoning model trained and served entirely at native 4-bit precision. The goal was simple: make it genuinely easy to build, test, and ship with a reasoning model in real-world systems, not just demos. What’s included in the first release: a clean Python SDK with sync, async, and streaming support; a...
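
The snippet above cuts off before showing the SDK’s own interface, so the block below is only a rough sketch of what sync, streaming, and async calls could look like through an OpenAI-compatible endpoint (the compatibility is mentioned in the free-tokens post further down), using the standard openai client rather than the Alpie SDK itself. The base URL, model id, and environment variable are placeholder assumptions, not documented values.

```python
# Sketch only: sync, streaming, and async calls via an OpenAI-compatible endpoint.
# The base URL, model id, and env var are placeholders, not documented values.
import asyncio
import os

from openai import AsyncOpenAI, OpenAI

BASE_URL = "https://api.example.com/v1"  # placeholder endpoint
MODEL = "alpie-core"                     # placeholder model id
API_KEY = os.environ["ALPIE_API_KEY"]    # placeholder env var


def sync_call() -> None:
    client = OpenAI(api_key=API_KEY, base_url=BASE_URL)
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": "Outline a step-by-step plan."}],
    )
    print(resp.choices[0].message.content)


def streaming_call() -> None:
    client = OpenAI(api_key=API_KEY, base_url=BASE_URL)
    stream = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": "Explain your reasoning as you go."}],
        stream=True,
    )
    for chunk in stream:  # print partial tokens as they arrive
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()


async def async_call() -> None:
    client = AsyncOpenAI(api_key=API_KEY, base_url=BASE_URL)
    resp = await client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": "Summarise the key trade-offs."}],
    )
    print(resp.choices[0].message.content)


if __name__ == "__main__":
    sync_call()
    streaming_call()
    asyncio.run(async_call())
```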
Something odd we noticed with a 4-bit reasoning model
While testing Alpie Core beyond benchmarks, we noticed something unexpected. On tasks like step-by-step reasoning, reflective questions, and simple planning (“help me unwind after work”, “break this problem down calmly”), the model tends to stay unusually structured and neutral. Less fluff, less bias, more explicit reasoning. It made us wonder if training and serving entirely at low precision...
What would you build or benchmark with 5M free tokens on a reasoning model?
To encourage real experimentation, we’re offering 5 million free tokens on first API usage so devs and teams can test Alpie Core over Christmas and the New Year. Alpie Core is a 32B reasoning model trained and served at 4-bit precision, offering 65K context, OpenAI-compatible APIs, and high-throughput, low-latency inference. If you were evaluating or using a model like this: – What would you...
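
For anyone deciding what to benchmark with the free tokens, here is a minimal sketch of a latency and tokens-per-second probe over the OpenAI-compatible API mentioned above. The endpoint, model id, prompts, and environment variable are placeholder assumptions, not documented values.

```python
# Minimal latency/throughput probe against an OpenAI-compatible endpoint.
# Endpoint, model id, and env var below are placeholders, not documented values.
import os
import time

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["ALPIE_API_KEY"],    # placeholder env var
    base_url="https://api.example.com/v1",  # placeholder endpoint
)

PROMPTS = [
    "Break down how you would debug a flaky integration test.",
    "Plan a three-step approach to refactoring a legacy module.",
]

for prompt in PROMPTS:
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model="alpie-core",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
    )
    elapsed = time.perf_counter() - start
    out_tokens = resp.usage.completion_tokens
    print(f"{elapsed:.2f}s, {out_tokens} completion tokens, {out_tokens / elapsed:.1f} tok/s")
```

Swapping in longer prompts closer to the 65K context limit would give a better picture of throughput under realistic load.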


