Anna Jambhulkar

NEES Core Engine - Governed AI runtime for stable, traceable AI products

NEES Core Engine is a governed AI runtime that adds intent, identity, memory, policy, and traceability controls before AI models respond. Naina Persona is the live proof-of-concept showing NEES governance working inside a real AI companion app.


Replies

Anna Jambhulkar
Hi Product Hunt 👋

I'm Piyush Jambhulkar, founder of Nainacore Emotional Tech. I built NEES Core Engine because I kept seeing the same problem with AI products: prompts work in demos, but production AI needs more than prompting.

When an AI product reaches real users, teams need to know:
- what intent was detected
- which identity/persona rules were applied
- what memory boundaries were used
- which policy controlled the response
- why the output was allowed, blocked, or shaped
- how to trace behavior later

NEES Core Engine adds a governance layer before the AI model responds. It is not another chatbot. It is infrastructure for building AI products that are more stable, explainable, and controlled.

For this Product Hunt launch, I'm showing NEES through Naina Persona, a live proof-of-concept AI companion app powered by the NEES governance runtime.

What you can try:
1. Open the Naina Persona demo
2. Send a normal message
3. Observe how the app preserves identity, mode, session continuity, and governed behavior
4. Explore the NEES Cloud ecosystem and Core Engine API

I'm especially looking for feedback from:
- AI founders
- indie hackers
- developers building AI apps
- teams worried about AI reliability, memory, and behavior control

Would love your feedback: what kind of governance controls would you want before an AI model responds in your product?
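To make "governance before the model responds" concrete, here is a rough sketch of that flow in plain Python. Every name below is made up for illustration; this is not the actual NEES Core Engine API, just the intent → identity → memory → policy → trace sequence described above.

```python
# Hypothetical sketch of a governance layer that runs BEFORE the model call.
# All names are illustrative; none of this is the real NEES API.
from dataclasses import dataclass, field


@dataclass
class Decision:
    """Audit record explaining why an output was allowed, blocked, or shaped."""
    intent: str
    persona: str
    memory_scope: str
    policy: str
    outcome: str                          # "allowed" | "blocked" | "shaped"
    trace: list = field(default_factory=list)


def govern_and_respond(message: str, session: dict, model) -> tuple[str, Decision]:
    decision = Decision(intent="", persona=session["persona"],
                        memory_scope=session["memory_scope"],
                        policy="default", outcome="allowed")

    # 1. Intent: classify the message before anything touches the model.
    decision.intent = "question" if "?" in message else "smalltalk"
    decision.trace.append(f"intent={decision.intent}")

    # 2. Identity: pin the persona rules for this session.
    system_prompt = f"You are {decision.persona}. Stay in character."
    decision.trace.append(f"persona={decision.persona}")

    # 3. Memory: only pass context that falls inside the allowed boundary.
    context = [m for m in session["history"] if m["scope"] == decision.memory_scope]
    decision.trace.append(f"memory_items={len(context)}")

    # 4. Policy: block or shape before the model is ever called.
    if any(word in message.lower() for word in session["blocklist"]):
        decision.outcome = "blocked"
        decision.trace.append("policy=blocklist hit")
        return "I can't help with that.", decision

    # 5. The model call happens last, with governed inputs only.
    reply = model(system_prompt, context, message)
    decision.trace.append("policy=default passed")
    return reply, decision


# Usage with a stand-in model, just to show the shape of the result:
fake_model = lambda sys_prompt, ctx, msg: f"echo: {msg}"
session = {"persona": "Naina", "memory_scope": "session",
           "history": [], "blocklist": ["forbidden"]}
reply, decision = govern_and_respond("Hello!", session, fake_model)
print(reply, decision.outcome, decision.trace)
```

The point of the sketch: every step emits a trace entry, so "why did the AI say that?" is answerable after the fact instead of being a prompt-debugging exercise.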
Charan Achari

“Prompts work in demos, not in production” is very real. Governance as a runtime layer makes a lot of sense.

Anna Jambhulkar

@charan_achari Absolutely, Charan — that’s exactly the pain point NEES is built around.

Prompts are useful, but production AI needs structural controls around intent, identity, memory, policy, and traceability before the model responds.
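To make the traceability piece concrete: a governed runtime can emit a small per-response decision record like the one below. The field names are illustrative only, not the actual NEES schema.

```python
# Hypothetical per-response audit record; field names are made up for illustration.
decision_record = {
    "request_id": "req-0001",            # made-up identifier
    "intent": "question",                # what intent was detected
    "persona": "Naina",                  # which identity/persona rules applied
    "memory_scope": "session",           # what memory boundary was used
    "policy": "default",                 # which policy controlled the response
    "outcome": "allowed",                # allowed | blocked | shaped
    "reason": "no policy rule matched",  # why, so behavior can be traced later
}
```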

Curious to hear your view: which governance layer feels most critical for production AI — policy checks, memory boundaries, traceability, or identity consistency?