Launching today
InsAIts

Open-source monitoring for AI-to-AI communication that detects hallucinations

Agents contradict facts, fabricate citations/URLs/DOIs, lose confidence, and spread errors silently: one agent's hallucination becomes another's "truth." InsAIts ships 5 hallucination-detection subsystems (cross-agent contradictions, phantom citations, document grounding, confidence decay, self-consistency) plus 6 anomaly detectors.

Features:
- Open-source core
- Privacy-first: everything runs locally
- 3-line setup with any LLM or Ollama (sketched below)
- Integrations: LangChain, CrewAI, LangGraph
- Slack/Notion exports, forensic tracing
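The "3-line setup" suggests an API along these lines. This is a speculative sketch: the insaits module name, Monitor class, and watch() method are assumptions for illustration, not the documented interface.

```python
# Speculative sketch of the advertised 3-line setup. The module name,
# Monitor class, and watch() method are assumptions, not the real InsAIts API.
from insaits import Monitor

# Example AI-to-AI traffic in a hypothetical message format.
agent_messages = [
    {"from": "planner", "to": "executor", "text": "Budget is 5x the baseline."},
]

monitor = Monitor(ollama_model="llama3")  # assumed: a local Ollama model runs the analysis
monitor.watch(agent_messages)             # assumed: scores the messages for hallucinations
```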
Free Options

Patrick (Maker)
Hi Product Hunt! I'm the creator of InsAIts. I built this because I kept seeing the same problem across every multi-agent AI system I worked with: agents pass bad information to each other, and there's no monitoring layer to catch it. Today we're open-sourcing the core under Apache 2.0.

The "aha moment" came while watching a finance pipeline where one agent hallucinated a 5x cost difference. It propagated through three more agents before reaching the output. Nobody caught it because nobody was monitoring the AI-to-AI channel.

InsAIts V2.4 adds deep hallucination detection, designed specifically for the problems that emerge when AI agents communicate:

1. Cross-agent contradictions (the big one: no other tool catches this)
2. Phantom citations (fabricated URLs, DOIs, paper references; see the sketch below)
3. Source grounding (are responses actually based on your documents?)
4. Confidence decay (is the agent losing certainty over time?)

Everything runs locally. We never see your data; the API key is only for usage tracking.

**Open-core model:** The core (anomaly detection, hallucination detection, forensic tracing, dashboard, all integrations) is Apache 2.0 open-source. Premium features (adaptive dictionaries, advanced detection, auto-decipher) ship with the pip install: proprietary, but included in the package. You can also choose your own Ollama model for local processing.

I'd love to hear from anyone building multi-agent systems. What failure modes have you encountered? What would you want monitored?
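To make the phantom-citation check (item 2 above) concrete, here is a minimal sketch of one way such a detector could work, assuming it extracts DOIs from agent output and tests whether they resolve. The find_phantom_dois helper is hypothetical, not the InsAIts API.

```python
# Minimal sketch of a phantom-citation check: pull DOIs out of an agent's
# output and flag any that fail to resolve at doi.org. Illustration only;
# find_phantom_dois is a hypothetical helper, not the InsAIts API.
import re
import urllib.request

DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s\"'<>]+")

def find_phantom_dois(agent_output: str, timeout: float = 5.0) -> list[str]:
    """Return DOIs in the text that do not resolve, i.e. likely fabrications."""
    phantoms = []
    for doi in DOI_PATTERN.findall(agent_output):
        request = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
        try:
            urllib.request.urlopen(request, timeout=timeout)
        except Exception:
            phantoms.append(doi)  # unreachable or 404: likely fabricated
    return phantoms

# A fabricated DOI in an agent message would be flagged:
print(find_phantom_dois("As shown in doi:10.99999/fake.citation.2024 ..."))
```

A real detector would also need to handle plain URLs and paper titles, and cache lookups so repeated messages don't re-query the network.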