Chirag Arya

1h ago

Try Alpie Core in a full workspace with files, research & collaboration

Hey everyone

Thank you again for the support on Alpie Core; the feedback from this community meant a lot to us.

Since then, we have finally released Alpie, our most advanced product yet: a full AI workspace where you can see Alpie Core working in real workflows, not just isolated prompts. You can use the model with files and PDFs, run research, collaborate with others in shared chats, and keep long-running context organised.

If you were curious how Alpie Core performs beyond single queries, this is where you can try it hands-on.

Chirag Arya

2d ago

Python SDK + CLI for Alpie Core are live (sync, async, streaming)

Hey Builders

We have just released the official Python SDK and CLI for Alpie Core, our 32B reasoning model trained and served entirely at native 4-bit precision.

The goal was simple: make it genuinely easy to build, test, and ship with a reasoning model in real-world systems, not just demos.

What's included in the first release:
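Since Alpie Core exposes OpenAI-compatible APIs, here is a minimal sketch of the request shape such an SDK would be wrapping, covering both sync and streaming calls. The model identifier below is a placeholder for illustration, not a confirmed name from the release.

```python
import json

def build_chat_request(prompt: str, stream: bool = False) -> dict:
    """Build an OpenAI-compatible chat.completions-style request body.

    "alpie-core" is a placeholder model identifier, not a confirmed value.
    """
    return {
        "model": "alpie-core",
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,  # True requests token-by-token streaming
    }

# A blocking (sync) request and a streaming variant share the same shape.
sync_body = build_chat_request("Break this problem down step by step.")
stream_body = build_chat_request("Summarise this PDF.", stream=True)

print(json.dumps(sync_body, indent=2))
```

Because the wire format is OpenAI-compatible, existing OpenAI client libraries pointed at the hosted endpoint should also work with bodies of this shape.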

Chirag Arya

13d ago

What would you build or benchmark with 5M free tokens on a reasoning model?

To encourage real experimentation, we're offering 5 million free tokens on first API usage so devs and teams can test Alpie Core over Christmas and the New Year.

Alpie Core is a 32B reasoning model trained and served at 4-bit precision, offering 65K context, OpenAI-compatible APIs, and high-throughput, low-latency inference.
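For a rough sense of how far 5 million tokens go, here is a back-of-the-envelope budget. The per-request token figure is an assumption for illustration (reasoning models spend extra tokens on chain of thought), not a measured number.

```python
# Back-of-the-envelope budget for the 5M free-token offer.
FREE_TOKENS = 5_000_000

# Assumed average per request: prompt + chain of thought + answer.
# This is an illustrative guess, not a measurement of Alpie Core.
assumed_tokens_per_request = 2_000

requests_possible = FREE_TOKENS // assumed_tokens_per_request
print(requests_possible)  # 2500 requests at ~2K tokens each
```

That is enough headroom to run a small benchmark suite or an agent workload end to end, which is the kind of real evaluation the questions below are asking about.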

If you were evaluating or using a model like this:
What would you benchmark first?
What workloads matter most to you?
What comparisons would you want to see?

Chirag Arya

13d ago

Something odd we noticed with a 4-bit reasoning model

While testing Alpie Core beyond benchmarks, we noticed something unexpected.

On tasks like step-by-step reasoning, reflective questions, and simple planning ("help me unwind after work", "break this problem down calmly"), the model tends to stay unusually structured and neutral. Less fluff, less bias, more explicit reasoning.

It made us wonder if training and serving entirely at low precision changes how a model reasons, not just how fast it runs. Sometimes the chain of thought itself is something you'd actually want to read to understand the reasoning behind the final answer.

Chirag Arya

13d ago

Alpie Core - A 4-bit reasoning model with frontier-level performance

Alpie Core is a 32B reasoning model trained, fine-tuned, and served entirely at 4-bit precision. Built with a reasoning-first design, it delivers strong performance in multi-step reasoning and coding while using a fraction of the compute of full-precision models. Alpie Core is open source, OpenAI-compatible, supports long context, and is available via Hugging Face, Ollama, and a hosted API for real-world use.
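To see why serving at 4-bit precision cuts the compute budget so sharply, here is a quick weight-storage estimate for a 32B-parameter model. These are lower bounds on memory for the weights alone; real deployments add KV cache, activations, and quantisation scales on top.

```python
# Approximate weight-storage footprint of a 32B-parameter model
# at different precisions (weights only, no runtime overhead).
PARAMS = 32e9

def weight_gb(bits_per_param: float) -> float:
    """Gigabytes needed to store PARAMS weights at the given bit width."""
    return PARAMS * bits_per_param / 8 / 1e9

print(f"fp16: {weight_gb(16):.0f} GB")   # 64 GB
print(f"4-bit: {weight_gb(4):.0f} GB")   # 16 GB
```

A 4x reduction in weight memory is what lets a 32B model fit on much smaller hardware, which is the "fraction of the compute" claim above in concrete terms.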