AI systems today are unreliable: the same input can produce different outputs, with no traceability or enforcement.
Manthan approaches this differently by treating decisions as deterministic, auditable, and enforceable units instead of probabilistic outputs.
It introduces decision infrastructure for AI systems where reliability is not optional.

Manthan: Deterministic decision infrastructure for AI
Pavan Dev Singh Charak left a comment
Hey everyone — Pavan here, founder of Manthan. I’ve been thinking deeply about one core issue: AI systems don’t actually make decisions; they generate outputs. That makes them unreliable for anything critical. Manthan is an attempt to approach this differently by making decisions deterministic, traceable, and enforceable. This is still early, and I’d really value feedback from people building...

Pavan Dev Singh Charak started a discussion
Why AI needs deterministic decision systems
Most AI systems today don’t actually make decisions. They generate outputs. That means:
• Same input → different results
• No traceability
• No enforcement
• No guarantees
This works for content. It breaks for systems. So the question is: should we keep treating AI outputs as decisions? Or do we need a new layer where decisions are:
• Deterministic
• Auditable
• Enforceable
This is what I’m exploring...
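To make the three properties concrete, here is a minimal sketch of what such a decision layer could look like. This is purely illustrative and not Manthan’s actual API; every name here (`decide`, `RULES`, `AUDIT_LOG`) is a hypothetical stand-in. Determinism comes from a fixed rule table and canonical input serialization, auditability from a hashed log entry per decision, and enforcement from refusing inputs that match no rule instead of guessing.

```python
import hashlib
import json

# Hypothetical sketch, not Manthan's real implementation.
AUDIT_LOG = []

RULES = [
    # (predicate, decision) pairs, evaluated in a fixed order
    (lambda req: req["risk_score"] >= 80, "deny"),
    (lambda req: req["risk_score"] >= 50, "manual_review"),
    (lambda req: req["risk_score"] >= 0, "approve"),
]

def decide(request: dict) -> str:
    """Return a decision for `request`, recording an auditable trace."""
    # Canonical serialization so identical inputs always hash the same.
    canonical = json.dumps(request, sort_keys=True)
    input_hash = hashlib.sha256(canonical.encode()).hexdigest()
    for rule_id, (predicate, decision) in enumerate(RULES):
        if predicate(request):
            # Every decision leaves a traceable record.
            AUDIT_LOG.append(
                {"input_hash": input_hash, "rule": rule_id, "decision": decision}
            )
            return decision
    # Enforcement: no matching rule means no decision, never a guess.
    raise ValueError("no rule matched; refusing to decide")

# Same input always yields the same decision.
assert decide({"risk_score": 55}) == decide({"risk_score": 55}) == "manual_review"
```

Contrast this with sampling from a model: here, replaying any logged `input_hash` against the same rule table reproduces the decision exactly, which is what makes the trace an audit rather than just a log.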
