To encourage real experimentation, we're offering 5 million free tokens on first API usage so developers and teams can test Alpie Core over Christmas and the New Year.
Alpie Core is a 32B reasoning model trained and served at 4-bit precision, offering a 65K-token context window, OpenAI-compatible APIs, and high-throughput, low-latency inference.
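Because the API is OpenAI-compatible, existing client code should work by swapping in the endpoint. Here is a minimal sketch of a chat-completion request; the base URL and model identifier are assumptions (check the official docs for the actual values), and the network call itself is left commented out:

```python
import json

# Hypothetical request body for an OpenAI-compatible chat completions endpoint.
# "alpie-core" is an assumed model identifier, not a confirmed one.
payload = {
    "model": "alpie-core",
    "messages": [
        {"role": "user", "content": "Summarize the key trade-offs of 4-bit inference."},
    ],
    "max_tokens": 512,
}

# With the official OpenAI Python SDK you would point base_url at the
# Alpie Core endpoint (URL below is a placeholder):
#
#   from openai import OpenAI
#   client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_KEY")
#   resp = client.chat.completions.create(**payload)
#   print(resp.choices[0].message.content)

print(json.dumps(payload, indent=2))
```

The same payload works with any OpenAI-compatible client or a plain HTTP POST, which is the point of the compatibility claim: no rewrite needed to trial the model.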
If you were evaluating or using a model like this: what would you benchmark first? Which workloads matter most to you? What comparisons would you want to see?